
Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

RAG-based Architectures for Drug Side Effect Retrieval in LLMs

Created by
  • Haebom

Author

Shad Nygren, Pinar Avci, Andre Daniels, Reza Rassol, Afshin Beheshti, Diego Galeano

Outline

In this paper, we propose a novel approach that leverages LLMs for adverse drug reaction detection and analysis, despite the known limitations of large language models (LLMs). To address the problems of conventional LLMs, such as black-box training-data dependency, hallucination, and lack of domain-specific knowledge, we propose two architectures, RAG and GraphRAG, that integrate comprehensive adverse drug reaction knowledge into the Llama 3 8B language model. Experiments on a dataset of 19,520 adverse drug reaction associations show that GraphRAG achieves near-perfect accuracy in adverse drug reaction detection. This provides an accurate and scalable solution and represents a significant advance in applying LLMs to pharmacovigilance.
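The RAG side of the pipeline described above can be sketched as: retrieve the drug–side-effect facts most relevant to a query, then prepend them to the LLM prompt as grounding. The sketch below is a minimal illustration under stated assumptions; the toy facts and the word-overlap scorer are stand-ins for the paper's actual dataset and vector retriever, which are not specified here.

```python
# Minimal RAG sketch: keyword-overlap retrieval standing in for a
# vector-similarity retriever, plus prompt assembly. Facts are illustrative,
# NOT the paper's 19,520-association dataset.

def retrieve(query: str, facts: list[str], k: int = 2) -> list[str]:
    """Rank facts by word overlap with the query and return the top k."""
    q = set(query.lower().split())
    scored = sorted(facts,
                    key=lambda f: len(q & set(f.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, facts: list[str]) -> str:
    """Prepend retrieved facts as context so the LLM answers from them,
    reducing reliance on parametric (potentially hallucinated) knowledge."""
    context = "\n".join(f"- {f}" for f in retrieve(query, facts))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\nAnswer using only the context.")

FACTS = [
    "aspirin is associated with gastrointestinal bleeding",
    "metformin is associated with lactic acidosis",
    "ibuprofen is associated with renal impairment",
]

print(build_prompt("Does aspirin cause bleeding?", FACTS))
```

In a full system the prompt would be passed to the Llama 3 8B model; the key design point is that the answer is constrained to retrieved, attributable knowledge rather than the model's memorized training data.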

Takeaways, Limitations

Takeaways:
Demonstrates that LLMs can improve the accuracy of drug side effect detection and analysis
Confirms that an efficient and accurate drug side effect retrieval system can be built with the GraphRAG architecture
Opens new possibilities for applying LLMs in pharmacovigilance
Contributes to building an accurate and scalable drug side effect information system
Limitations:
Further verification of the proposed model's generalization performance is needed.
Additional research is needed on applicability and stability in real clinical settings.
The limitations and biases of the dataset used should be considered.
It does not fully resolve the inherent limitations of the Llama 3 8B model itself (e.g., its potential for hallucination).
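The GraphRAG architecture credited above with near-perfect detection accuracy differs from plain RAG in that it retrieves from a structured knowledge graph rather than ranked text passages. A minimal sketch of that idea, with illustrative edges (not the paper's actual graph or API):

```python
# Hedged GraphRAG sketch: store drug–side-effect associations as graph edges
# and serialize a drug's neighborhood as structured context for the LLM.
# Drug/effect names here are illustrative assumptions.
from collections import defaultdict

class SideEffectGraph:
    def __init__(self) -> None:
        # drug -> set of associated side effects
        self.edges: dict[str, set[str]] = defaultdict(set)

    def add(self, drug: str, effect: str) -> None:
        self.edges[drug].add(effect)

    def context_for(self, drug: str) -> str:
        """Serialize the drug's graph neighborhood for prompt injection."""
        effects = sorted(self.edges.get(drug, set()))
        if not effects:
            return f"No known associations for {drug}."
        return f"{drug} is associated with: " + ", ".join(effects)

g = SideEffectGraph()
g.add("warfarin", "bleeding")
g.add("warfarin", "skin necrosis")
print(g.context_for("warfarin"))
# → warfarin is associated with: bleeding, skin necrosis
```

Because retrieval returns a drug's complete, explicitly stored neighborhood instead of approximately similar text, this structure plausibly explains the accuracy gap over plain RAG reported in the outline.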