This paper analyzes how a language model (LM) answers one-to-many factual questions (e.g., listing the cities of a given country). Across a variety of datasets, models, and prompts, we show that the LM uses a "promote-then-suppress" mechanism: it first recalls all candidate answers and then suppresses those that have already been generated. Specifically, the LM uses both the subject and previous answer tokens to perform knowledge recall, with attention propagating subject information and the multi-layer perceptron (MLP) promoting the answers. Attention then attends to previous answer tokens to suppress them, while the MLP amplifies the suppression signal. We demonstrate this mechanism with experimental evidence based on token-lens and knockout techniques. Overall, we provide new insights into how the LM's internal components interact with different input tokens to support complex factual recall.
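As a rough illustration of the kind of analysis described above, the sketch below decomposes the attention output at the final position into per-source-token contributions and projects each through the unembedding matrix, so one can read off whether a given source token (e.g., a subject token or a previously generated answer token) promotes or suppresses a candidate answer. This is a minimal, hypothetical sketch, not the paper's actual code: all function names, array shapes, and the single-head setup are assumptions made for clarity.

```python
# Hypothetical sketch of a token-lens-style decomposition (not the paper's code).
import numpy as np

def token_lens_contributions(attn_weights, value_vectors, W_O, W_U):
    """Per-source-token logit contributions at the final position (one head).

    attn_weights : (num_src,)        attention from the last position to each source token
    value_vectors: (num_src, d_head) value vector of each source token
    W_O          : (d_head, d_model) head output projection into the residual stream
    W_U          : (d_model, vocab)  unembedding matrix
    Returns      : (num_src, vocab)  logit effect attributable to each source token
    """
    # Contribution of source token i to the residual stream: a_i * (v_i @ W_O)
    residual_contrib = attn_weights[:, None] * (value_vectors @ W_O)
    # Project each per-token contribution onto the vocabulary.
    return residual_contrib @ W_U

# Toy usage with random weights (shapes are illustrative only).
rng = np.random.default_rng(0)
num_src, d_head, d_model, vocab = 5, 8, 16, 100
scores = rng.standard_normal(num_src)
attn = np.exp(scores) / np.exp(scores).sum()  # softmax attention weights
logit_effects = token_lens_contributions(
    attn,
    rng.standard_normal((num_src, d_head)),
    rng.standard_normal((d_head, d_model)),
    rng.standard_normal((d_model, vocab)),
)
answer_token_id = 42  # hypothetical id of an already-generated answer
# Positive entries indicate promotion of that answer by a source token,
# negative entries indicate suppression.
print(logit_effects[:, answer_token_id])
```

A knockout-style experiment would instead intervene on the model, for example by zeroing the attention weights from the final position to the subject or previous answer tokens, and measure the resulting change in the answer logits; the decomposition above is the observational counterpart of that intervention.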