This paper studies the problem of language generation in the limit, introduced by Kleinberg and Mullainathan [KM24]. The algorithm of [KM24] generates in the limit from every countable collection of languages, but it lacks breadth: it is not guaranteed to eventually produce all strings of the target language. In this paper, we characterize the generative power of the original notion of breadth and of its extensions, and prove lower bounds on various performance metrics (e.g., perplexity and hallucination rate). We also prove unconditional lower bounds for stable generators, showing that generation with breadth becomes difficult once stability is required. Finally, we highlight the intricate interplay between breadth, stability, and consistency, and exhibit a separation between stable and unstable generators for generation with approximate breadth.