Understanding the semantic representations of large language models (LLMs) is crucial for interpretability and architectural innovation. This paper challenges the conventional view that trainable input embeddings serve as fundamental "semantic vectors." In this study, we constructed Transformer models whose input embeddings are fixed, non-semantic, and precomputed from the visual structure of Unicode glyphs, rather than learned from the training data. These models outperformed models with trainable embeddings on the MMLU reasoning benchmark. We attribute this to "representational interference": in conventional models, the embedding layer must learn both structural and semantic features simultaneously. Our findings suggest that high-level semantics are not inherent in the input embeddings, but are instead an emergent property of the Transformer's compositional architecture and data scale.
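
To make the setup concrete, the following is a minimal sketch (not the authors' implementation) of how a frozen, non-semantic embedding table could be built from rendered Unicode glyph bitmaps. The bitmap size, font, random projection to the model width, and the printable-ASCII vocabulary are all illustrative assumptions.

```python
# Illustrative sketch: a frozen embedding table derived from glyph bitmaps.
# Font, bitmap size, vocabulary, and projection are assumptions, not the paper's exact recipe.
import torch
from PIL import Image, ImageDraw, ImageFont

def glyph_bitmap(ch: str, size: int = 16) -> torch.Tensor:
    """Render a single character to a size x size grayscale bitmap, flattened to a vector."""
    img = Image.new("L", (size, size), color=0)
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()  # assumption: default bitmap font for simplicity
    draw.text((0, 0), ch, fill=255, font=font)
    return torch.tensor(list(img.getdata()), dtype=torch.float32) / 255.0

# Hypothetical character-level vocabulary (printable ASCII only, for brevity).
vocab = [chr(c) for c in range(32, 127)]
pixel_vectors = torch.stack([glyph_bitmap(ch) for ch in vocab])  # shape (V, 256)

# Project the fixed pixel vectors to the model width once; the result is never trained.
d_model = 512
projection = torch.randn(pixel_vectors.shape[1], d_model) / pixel_vectors.shape[1] ** 0.5
embedding_table = pixel_vectors @ projection  # shape (V, d_model)

# freeze=True keeps the table out of gradient updates, unlike a standard trainable embedding.
embedding = torch.nn.Embedding.from_pretrained(embedding_table, freeze=True)

# Token ids index the frozen table exactly as they would a trainable embedding layer.
token_ids = torch.tensor([[vocab.index(c) for c in "hello"]])
print(embedding(token_ids).shape)  # torch.Size([1, 5, 512])
```

In this sketch, the only information available at the input layer is the visual form of each glyph; any semantic structure must therefore emerge in the Transformer layers above it, which is the property the paper's experiments probe.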