Neural language models are black boxes: linguistic patterns and factual knowledge are distributed across numerous opaque parameters. This entangled encoding makes it difficult to reliably inspect, verify, or update specific facts. In this paper, we introduce Limited Memory Language Models (LMLM), which offload factual knowledge to an external database rather than memorizing it during pretraining. Our pretraining recipe strategically masks externally retrieved factual values from the training loss, teaching the model to perform targeted lookups rather than relying on knowledge stored in its weights. Experiments demonstrate that LMLMs achieve performance competitive with much larger LLMs on standard benchmarks, while offering the advantages of an explicit, editable, and verifiable knowledge base.
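
To make the loss-masking idea concrete, the sketch below shows one plausible way to exclude retrieved fact-value tokens from a causal language-modeling loss. The function name, the `fact_value_mask` preprocessing, and the PyTorch framing are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumed, not the authors' code): mask externally retrieved
# fact-value tokens out of the next-token prediction loss, so gradients reward
# producing the lookup rather than memorizing the retrieved value.
import torch
import torch.nn.functional as F

IGNORE_INDEX = -100  # positions labeled with this value contribute no loss


def masked_lm_loss(logits, input_ids, fact_value_mask):
    """
    logits:          (batch, seq_len, vocab) model outputs
    input_ids:       (batch, seq_len) training token ids
    fact_value_mask: (batch, seq_len) bool, True where a token belongs to a
                     retrieved factual value (hypothetical preprocessing step)
    """
    labels = input_ids.clone()
    labels[fact_value_mask] = IGNORE_INDEX  # drop retrieved values from the loss

    # Standard causal-LM shift: predict token t+1 from positions <= t.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()

    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=IGNORE_INDEX,
    )
```

Under this assumed setup, the model is still trained to emit whatever lookup syntax precedes a retrieved value, but receives no gradient signal for reproducing the value itself, which is the intended effect of masking factual content from the training loss.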