Monitoring the outputs of large language models (LLMs) is crucial for mitigating risks from misuse and misalignment. In this paper, we evaluate the potential for LLMs to evade monitoring by encoding hidden information within seemingly benign outputs, a practice known as steganography. Our primary focus is on two forms of steganography: encoded message passing and encoded reasoning.
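To make the notion of encoded message passing concrete, the following minimal sketch (illustrative only, not a scheme from this paper) hides a short string in the initial letters of the words of an otherwise innocuous cover text; the `word_bank`, `encode`, and `decode` names are hypothetical.

```python
# Toy acrostic-style steganography: a hidden string is spelled out by the
# first letters of the words in a cover sentence. Illustrative only.

def encode(hidden: str, word_bank: dict) -> str:
    """Build a cover sentence whose word initials spell the hidden message."""
    words = []
    for ch in hidden.lower():
        choices = word_bank.get(ch)
        if not choices:
            raise ValueError(f"no cover word starting with {ch!r}")
        # Cycle through candidate words so repeated letters vary the cover text.
        words.append(choices[len(words) % len(choices)])
    return " ".join(words)

def decode(cover: str) -> str:
    """Recover the hidden message from the word initials of the cover text."""
    return "".join(word[0] for word in cover.split())

word_bank = {
    "h": ["how", "having"],
    "i": ["is", "it"],
}

cover = encode("hi", word_bank)
assert decode(cover) == "hi"
print(cover)  # e.g. "how is"
```

A monitor reading only the surface text sees an ordinary sentence, while a colluding receiver that knows the convention recovers the hidden payload; real schemes studied in this setting are far subtler, embedding information in word choice, syntax, or sampling decisions rather than a fixed acrostic.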