This paper evaluates whether a large language model, pretrained on data similar in scale to the linguistic input available to humans, can learn and understand rare grammatical phenomena. Specifically, we test the model's knowledge of both form and meaning, using the English LET-ALONE construction as the target. Evaluating the model on artificially constructed benchmarks reveals that while it learns the form well, it fails to generalize to the meaning.
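For concreteness, form knowledge of this kind is commonly probed with minimal pairs: the model should assign higher probability to a licensed LET-ALONE sentence than to a minimally different unlicensed one. The sketch below illustrates such a comparison with an assumed causal LM scored by summed token log-probabilities; the model name and sentence pair are illustrative, not the paper's actual benchmark.

```python
# Minimal-pair scoring sketch (illustrative, not the paper's code).
# Assumption: a causal LM scored by summing log P(token_i | tokens_<i).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Sum of next-token log-probabilities over the sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Shift so position i predicts token i+1.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = ids[:, 1:]
    scores = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    return scores.sum().item()

# Form test: LET ALONE requires a licensing (e.g., negative) context,
# so the first variant should be preferred over the second.
licensed = "He can't boil an egg, let alone cook a three-course dinner."
unlicensed = "He can boil an egg, let alone cook a three-course dinner."
print(sentence_log_prob(licensed) > sentence_log_prob(unlicensed))
```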