A common way to convey uncertainty in large language models (LLMs) is to attach percentages or hedging phrases to their answers. In this paper, we argue that LLMs should instead output a summary of all possible answer options together with their probabilities, reflecting their internal belief distribution. To test whether LLMs possess this ability, we develop a metric called SelfReflect, which measures the information-theoretic distance between an LLM's answer distribution and its self-generated summary. Our experiments show that while current LLMs fail to reveal this uncertainty directly, sampling multiple outputs and re-introducing them into the context allows them to produce faithful summaries of their uncertainty.
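The sample-then-summarize procedure mentioned above can be illustrated with a minimal sketch. The helpers `sample_answer` and `summarize` are hypothetical placeholders for whatever LLM interface is available, not part of the paper's implementation; the sketch only shows the general idea of drawing several answers, estimating their empirical frequencies, and asking the model to condense them into a single uncertainty-aware summary.

```python
from collections import Counter

def summarize_uncertainty(question, sample_answer, summarize, n_samples=20):
    """Sample-then-summarize sketch.

    `sample_answer(question)` and `summarize(prompt)` are assumed callables
    wrapping an LLM; they are illustrative, not the paper's actual API.
    """
    # Draw independent answers to expose the model's answer distribution.
    answers = [sample_answer(question) for _ in range(n_samples)]

    # Count distinct answers to approximate their empirical probabilities.
    counts = Counter(answers)
    total = sum(counts.values())
    distribution = [
        f"{ans} (~{count / total:.0%})" for ans, count in counts.most_common()
    ]

    # Re-introduce the sampled answers into the context and ask for a
    # single summary that states the options and their likelihoods.
    prompt = (
        f"Question: {question}\n"
        f"Sampled answers and frequencies: {'; '.join(distribution)}\n"
        "Summarize the possible answers and how likely each one is."
    )
    return summarize(prompt)
```

A faithful summary produced this way could then be scored against the empirical answer distribution with a metric such as SelfReflect.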