This paper presents a novel adversarial attack on toxicity detection models that exploits language models' limited ability to interpret spatially structured text rendered as ASCII art. We propose ToxASCII, a benchmark for evaluating the robustness of toxicity detection systems against visually obfuscated inputs. We demonstrate that ToxASCII achieves a perfect attack success rate (ASR) across a range of state-of-the-art large language models (LLMs) and dedicated moderation tools, exposing a serious vulnerability in current text-only moderation systems.
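
As a rough illustration of the obfuscation idea only (not the paper's own attack pipeline), the sketch below renders a word as ASCII art so that a text-only moderation model sees layout characters rather than the original token. The use of the third-party `pyfiglet` package and the `obfuscate_word` helper are assumptions made for demonstration.

```python
# Minimal sketch of ASCII-art obfuscation (illustrative only, assuming pyfiglet is installed).
# A text-only classifier receives characters like '_', '|', and '/' instead of the original word.
import pyfiglet


def obfuscate_word(word: str, font: str = "standard") -> str:
    """Return an ASCII-art rendering of `word` using a FIGlet font."""
    return pyfiglet.figlet_format(word, font=font)


if __name__ == "__main__":
    # A benign placeholder word stands in for the toxic content targeted in the paper.
    print(obfuscate_word("EXAMPLE"))
```

In practice, the rendered block would be embedded in an otherwise innocuous prompt; whether the surrounding moderation system recovers the hidden word is exactly the robustness question the ToxASCII benchmark is designed to measure.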