Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

RAGtifier: Evaluating RAG Generation Approaches of State-of-the-Art RAG Systems for the SIGIR LiveRAG Competition

Created by
  • Haebom

Authors

Tim Cofala, Oleh Astappiev, William Xion, Hailay Teklehaymanot

Outline

This paper reports on the Retrieval-Augmented Generation (RAG) system the authors submitted to the SIGIR 2025 LiveRAG Challenge. Using DataMorgana-generated QA pairs, they explored RAG solutions that maximize accuracy under the challenge's constraint of LLMs with at most 10B parameters, with Falcon-3-10B as the generator. After experimenting with various retriever combinations and RAG solutions over OpenSearch and Pinecone indices, they selected InstructRAG with a Pinecone retriever and a BGE reranker as the final solution. This solution achieved a correctness score of 1.13 and a faithfulness score of 0.55 under automatic (non-human) evaluation, placing third overall.
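The final pipeline follows a retrieve-rerank-generate pattern: dense retrieval from a Pinecone index, cross-encoder reranking with a BGE model, and InstructRAG-style prompting of the generator. Below is a minimal Python sketch of that skeleton. The index name, embedding model, metadata field, and prompt wording are illustrative assumptions rather than the authors' exact configuration; only the BGE reranker checkpoint and the Pinecone and sentence-transformers APIs are real.

```python
# Minimal sketch of the retrieve -> rerank -> generate skeleton.
# Assumptions (not from the paper): the index name "liverag-corpus", a "text"
# metadata field, the MiniLM encoder, and the paraphrased InstructRAG prompt.
from pinecone import Pinecone
from sentence_transformers import SentenceTransformer, CrossEncoder

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("liverag-corpus")  # hypothetical index name

# The encoder must match the model that was used to build the Pinecone index.
embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
reranker = CrossEncoder("BAAI/bge-reranker-large")  # a BGE reranker

def retrieve_and_rerank(question: str, k_retrieve: int = 50,
                        k_keep: int = 5) -> list[str]:
    """Dense retrieval from Pinecone, then BGE cross-encoder reranking."""
    query_vec = embedder.encode(question).tolist()
    res = index.query(vector=query_vec, top_k=k_retrieve, include_metadata=True)
    passages = [m.metadata["text"] for m in res.matches]
    scores = reranker.predict([(question, p) for p in passages])
    ranked = sorted(zip(scores, passages), key=lambda t: t[0], reverse=True)
    return [p for _, p in ranked[:k_keep]]

def instructrag_prompt(question: str, passages: list[str]) -> str:
    """InstructRAG-style prompt (paraphrased): ask the generator to first
    explain how the retrieved documents support the answer, then answer.
    The resulting string would be fed to the Falcon-3-10B generator."""
    docs = "\n\n".join(f"Document {i + 1}: {p}" for i, p in enumerate(passages))
    return (
        f"{docs}\n\nQuestion: {question}\n"
        "Explain which documents support the answer and why, "
        "then state the final answer."
    )
```

The two-stage retrieval here reflects a common design choice: a cheap, high-recall first stage (top-50 dense hits) is narrowed by a more precise cross-encoder into a small final context (top-5 passages) that fits the generator's context window.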

Takeaways, Limitations

Takeaways: An InstructRAG-based RAG system with a Pinecone retriever and BGE reranker performs well on the LiveRAG Challenge. The systematic exploration of retriever combinations and RAG solutions validates the effectiveness of the optimization strategy.
Limitations: Because the evaluation was conducted under challenge-specific constraints (e.g., the 10B-parameter LLM cap and the required use of Falcon-3-10B), further validation of generalization performance is needed. Reliance on non-human (automatic) evaluation may itself be a limitation. Further research is needed to achieve higher correctness and faithfulness.