This paper presents a structured approach to assessing the risks of applying large language models (LLMs) to public health. Focus group interviews were conducted with public health professionals and individuals with lived experience, centered on three key public health issues: infectious disease prevention (vaccines), chronic disease and well-being management (opioid use disorder), and community health and safety (intimate partner violence). The concerns surfaced in these interviews were distilled into a risk taxonomy spanning four dimensions: individual behaviors, person-centered care, information ecosystems, and technology accountability. For each dimension, the paper identifies specific risks and accompanying reflection questions, supporting a reflective approach to risk assessment. The paper also revisits existing information behavior models and argues that risk assessment must be grounded in real-world experiences and practices, incorporating external validity and domain expertise. Ultimately, this study offers a shared vocabulary and reflective tools that computing and public health professionals can use to collaboratively anticipate, evaluate, and mitigate the potential harms of LLMs.