The Private Aggregation of Teacher Ensembles (PATE) framework enables privacy-preserving machine learning by aggregating responses from an ensemble of teacher models, each trained on a disjoint subset of the sensitive data. Adapting PATE to tasks with inherent output diversity, such as text generation, faces a key tension: as diversity increases, agreement between samples from different teachers decreases, which reduces the utility attainable under the same privacy requirements. However, artificially suppressing diversity to increase agreement is undesirable, as it distorts the output distribution of the underlying model and degrades output quality. In this paper, we propose Hot PATE, a variant of PATE designed for diverse generative settings. We formalize the notion of a diversity-preserving ensemble sampler and introduce an efficient sampler that transfers diversity without additional privacy cost. Hot PATE requires only API access to a proprietary model and can serve as a drop-in replacement for the existing Cold PATE sampler. Experimental evaluations demonstrate and quantify the benefits, showing that the proposed method significantly improves the privacy-utility trade-off, both in preserving diversity and in returning relevant responses, on the evaluated in-context learning tasks.