This paper presents Contextual Integrity Verification (CIV), a novel security architecture that addresses the vulnerability of large language models (LLMs) to prompt injection and related jailbreak attacks. CIV attaches a cryptographically signed source label to each token and enforces a source trust hierarchy inside the Transformer via a pre-softmax hard attention mask. This guarantees deterministic non-interference in the frozen model: low-trust tokens cannot influence high-trust representations. Experimental results show that CIV achieves a 0% attack success rate on benchmarks built from state-of-the-art prompt injection attack vectors, while maintaining 93.1% token similarity and exhibiting no degradation in model perplexity under normal operation. Results on Llama-3-8B and Mistral-7B are also reported, and a reference implementation, an automated verification tool, and the Elite-Attack corpus are released to support reproducible research.
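The pre-softmax hard-masking rule can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's reference implementation: it assumes integer trust levels assigned per token and standard scaled-dot-product attention, and all names (`civ_attention_mask`, the example trust values) are hypothetical.

```python
import torch

def civ_attention_mask(trust_levels: torch.Tensor) -> torch.Tensor:
    """Build an additive pre-softmax mask from per-token trust levels.

    trust_levels: (seq_len,) integer tensor; higher value = more trusted
    (e.g. 3 = system prompt, 2 = user turn, 0 = retrieved web content).
    A query position i may attend to a key position j only if
    trust_levels[j] >= trust_levels[i], so lower-trust tokens can never
    influence higher-trust representations.
    """
    q = trust_levels.unsqueeze(1)   # (seq_len, 1) trust of each query
    k = trust_levels.unsqueeze(0)   # (1, seq_len) trust of each key
    allowed = k >= q                # (seq_len, seq_len) boolean mask
    # Additive mask: 0 where attention is allowed, -inf where it is blocked;
    # added to attention scores before the softmax.
    mask = torch.zeros_like(allowed, dtype=torch.float)
    mask[~allowed] = float("-inf")
    return mask

# Illustrative usage: system tokens (3), user tokens (2), web tokens (0).
levels = torch.tensor([3, 3, 2, 2, 2, 0, 0])
scores = torch.randn(7, 7) + civ_attention_mask(levels)
attn = torch.softmax(scores, dim=-1)  # web tokens get zero weight for higher-trust queries
```

In a full Transformer this mask would be combined with the usual causal mask and broadcast across heads; the sketch shows only the trust-ordering constraint.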