Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Agent-Based Detection and Resolution of Incompleteness and Ambiguity in Interactions with Large Language Models

Created by
  • Haebom

Authors

Riya Naik (BITS Pilani KK Birla Goa Campus), Ashwin Srinivasan (BITS Pilani KK Birla Goa Campus), Swati Agarwal (PandaByte Innovations Pvt Ltd), Estrid He (RMIT University)

Outline

This paper proposes an agent-based architecture for improving the reasoning ability of LLM-based question-answering systems. Using LLM agents, the authors automatically detect and resolve incompleteness and ambiguity in questions, with a zero-shot ReAct agent acting as the expert agent; experiments use GPT-3.5-Turbo and Llama-4-Scout. At each step the agent selects one of three actions: question classification (incomplete, ambiguous, or normal), fault resolution, or answer generation, and LLMs with and without the agent are compared and analyzed. Experimental results show that the agent-based approach yields shorter interaction lengths, improved answer quality, and explainable resolution of question faults, at the cost of additional LLM calls and increased latency. On the test datasets, however, the benefits outweigh these costs except when a question already has sufficient context, suggesting that the agent-based approach is a useful mechanism for building more robust question-answering systems.
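To make the control flow concrete, below is a minimal Python sketch of the three-action loop described above (classify, resolve, answer). It is illustrative only: the prompt wording, labels, and the `llm` callable are assumptions rather than the authors' implementation, and any text-in/text-out wrapper around a chat model (e.g. GPT-3.5-Turbo or Llama-4-Scout) could be passed in as `llm`.

```python
# Illustrative sketch of the three-action agent loop (assumed names/prompts,
# not the paper's actual implementation).
from typing import Callable

CLASSIFY_PROMPT = (
    "Classify the user question as one of: INCOMPLETE, AMBIGUOUS, NORMAL.\n"
    "Question: {question}\nLabel:"
)
RESOLVE_PROMPT = (
    "The question below is {label}. Rewrite it so that it is complete and "
    "unambiguous, using the most plausible interpretation.\n"
    "Question: {question}\nRewritten question:"
)
ANSWER_PROMPT = "Answer the question concisely.\nQuestion: {question}\nAnswer:"


def agent_answer(question: str, llm: Callable[[str], str], max_rounds: int = 3) -> str:
    """Classify the question, resolve any detected fault, then answer.

    `llm` is any text-in/text-out callable; it is abstracted here so the
    sketch does not depend on a specific client library.
    """
    for _ in range(max_rounds):
        label = llm(CLASSIFY_PROMPT.format(question=question)).strip().upper()
        if label == "NORMAL":
            # Action 3: generate the answer directly.
            return llm(ANSWER_PROMPT.format(question=question))
        # Action 2: resolve the detected fault (incomplete/ambiguous)
        # and re-classify the repaired question on the next round.
        question = llm(RESOLVE_PROMPT.format(label=label.lower(), question=question))
    # Fall back to answering the best available version of the question.
    return llm(ANSWER_PROMPT.format(question=question))
```

In the paper the resolution step can also involve interaction with the user; the self-contained rewrite above is a simplification so the loop stays runnable without a user in the loop.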

Takeaways, Limitations

Takeaways:
  • Improves the reasoning ability of LLM-based question-answering systems
  • Shortens interaction length by automatically resolving incompleteness and ambiguity in questions
  • Improves answer quality and makes the resolution of question faults explainable
  • Suggests that agent-based approaches are a useful mechanism for developing more robust and efficient QA systems
Limitations:
  • Increased cost due to additional LLM calls (consumption of computational resources)
  • Increased latency in some cases
  • Decreased utility of the agent when the question already has sufficient context