Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Transforming Wearable Data into Personal Health Insights using Large Language Model Agents

Created by
  • Haebom

Authors

Mike A. Merrill, Akshay Paruchuri, Naghmeh Rezaei, Geza Kovacs, Javier Perez, Yun Liu, Erik Schenck, Nova Hammerquist, Jake Sunshine, Shyam Tailor, Kumar Ayush, Hao-Wei Su, Qian He, Cory Y. McLean, Mark Malhotra, Shwetak Patel, Jiening Zhan, Tim Althoff, Daniel McDuff, Xin Liu

Outline

This paper presents a tool-based approach that uses code generation to derive personalized insights from wearable tracker data. The authors developed the Personal Health Insights Agent (PHIA), a system that combines multi-step reasoning, code generation, and information retrieval, and evaluated it on two benchmark datasets comprising over 4,000 health insight questions. PHIA achieved 84% accuracy on objective numerical questions and 83% positive ratings on open-ended questions, earning the highest quality rating twice as often as a code-generation baseline. These results suggest that PHIA can deepen individuals' understanding of their own data and enable more accessible, personalized, data-driven health management.
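The core idea above (an agent that generates and executes code over a user's wearable log to answer a numerical health question) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual implementation: `mock_llm` stands in for a real language model, the data schema is invented, and the single generate-then-execute step omits PHIA's multi-step reasoning and information retrieval.

```python
# Minimal sketch of a code-generation agent loop over wearable data.
# All names (mock_llm, run_agent, wearable_log) are hypothetical.

def mock_llm(question: str, schema: str) -> str:
    """Stand-in for an LLM call: returns Python code that answers the
    question against a variable named `data`. A real agent would prompt
    a model with the question and the data schema."""
    return (
        "days = [d for d in data if d['steps'] is not None]\n"
        "result = sum(d['steps'] for d in days) / len(days)"
    )

def run_agent(question: str, data: list[dict]) -> float:
    schema = ", ".join(sorted(data[0].keys()))
    code = mock_llm(question, schema)        # 1. generate code
    namespace = {"data": data}
    exec(code, namespace)                    # 2. execute it (a real system would sandbox this)
    return namespace["result"]               # 3. read back the computed answer

wearable_log = [
    {"date": "2024-01-01", "steps": 8000,  "sleep_h": 7.5},
    {"date": "2024-01-02", "steps": 12000, "sleep_h": 6.0},
    {"date": "2024-01-03", "steps": None,  "sleep_h": 8.0},  # missing day
]

print(run_agent("What is my average daily step count?", wearable_log))  # 10000.0
```

Executing generated code rather than asking the model to do arithmetic directly is what makes the numerical answers exact; the trade-off is that the execution step must be sandboxed in any real deployment.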

Takeaways, Limitations

Takeaways:
Presents an effective LLM-based agent (PHIA) for wearable data analysis.
Demonstrates that multi-step reasoning, code generation, and information retrieval enable accurate, in-depth data analysis.
Achieves high performance on both objective and open-ended questions (84% accuracy and 83% positive ratings, respectively).
Contributes to personalized healthcare and data-driven wellness.
Releases benchmark datasets with over 4,000 health insight questions.
Limitations:
The scale and diversity of the benchmark datasets need further review.
Generalization to real-world environments requires further research.
The transparency and explainability of the agent's reasoning process need improvement.