Daily Arxiv

This page curates papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright of each paper belongs to its authors and their institutions; when sharing, simply cite the source.

A Risk Taxonomy and Reflection Tool for Large Language Model Adoption in Public Health

Created by
  • Haebom

Authors

Jiawei Zhou, Amy Z. Chen, Darshi Shah, Laura M. Schwab Reese, Munmun De Choudhury

Outline

To assess the potential risks of adopting large language models (LLMs) in public health, the authors conducted focus group discussions with experts and practitioners on three key public health issues: infectious disease prevention (vaccines), chronic disease and well-being management (opioid use disorder), and community health and safety (intimate partner violence). From these findings, they developed a risk taxonomy that categorizes the potential risks of using LLMs alongside traditional health communication across four dimensions: the individual, person-centered care, the information ecosystem, and technical responsibility. The taxonomy is paired with a reflection tool that helps practitioners assess these risks when deciding whether and how to adopt LLMs.

Takeaways, Limitations

Takeaways:
  • Provides a systematic framework for identifying potential risks of using LLMs in public health.
  • Proposes a practical, reflective approach to assessing and mitigating those risks.
  • Offers a shared vocabulary that facilitates collaboration between computing and public health professionals.
  • Supplies a tool to help practitioners decide whether to use LLMs and how to mitigate their harms.
Limitations:
  • The study's scope is limited to three specific public health issues.
  • There is no empirical evaluation of LLM performance or of real-world usage scenarios.
  • The generalizability of the risk taxonomy requires further validation.
  • The taxonomy will need continuous updating as the technology and the information environment evolve.