This paper proposes a method for enhancing the reasoning ability of LLM-based question answering systems through an agent-based architecture. Using an LLM agent, we implement a transcription mechanism that automatically resolves incompleteness or ambiguity in user questions, realized as a zero-shot ReAct expert agent that detects and resolves these faults, with GPT-3.5-Turbo and Llama-4-Scout as the underlying models. The agent selects one of three actions: question classification (incomplete, ambiguous, or normal), fault resolution, and answer generation, and we compare and analyze the LLM with and without the agent layer. Experimental results show that the agent-based approach yields shorter interaction lengths, improved answer quality, and explainable resolution of question faults, at the cost of additional LLM calls and increased latency. On the test dataset, however, the benefits outweigh these costs except when the question already contains sufficient context, suggesting that an agent-based approach can be a useful mechanism for building more capable question answering systems.
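To make the three-action loop concrete, the sketch below shows one way such an agent could dispatch between classification, fault resolution, and answer generation with a single LLM backend. This is a minimal illustration, not the paper's implementation: the prompts and names (`agent_answer`, `llm_call`, the stub `fake_llm`) are hypothetical placeholders, and a real deployment would call GPT-3.5-Turbo or Llama-4-Scout behind `llm_call`.

```python
# Minimal sketch of a classify -> resolve -> answer agent loop.
# All prompt wording and function names are illustrative assumptions,
# not the paper's actual prompts or code.
from typing import Callable

CLASSIFY_PROMPT = (
    "Classify the user question as one of: incomplete, ambiguous, normal.\n"
    "Question: {question}\nLabel:"
)
RESOLVE_PROMPT = (
    "The question below is {label}. Rewrite it into a complete, unambiguous "
    "question, filling in the missing detail if necessary.\n"
    "Question: {question}\nRewritten question:"
)
ANSWER_PROMPT = "Answer the question concisely.\nQuestion: {question}\nAnswer:"


def agent_answer(question: str, llm_call: Callable[[str], str]) -> str:
    """Run the three-action pipeline with a single LLM backend."""
    # Action 1: question classification (incomplete, ambiguous, normal).
    label = llm_call(CLASSIFY_PROMPT.format(question=question)).strip().lower()

    # Action 2: fault resolution, invoked only when a fault was detected.
    if label in {"incomplete", "ambiguous"}:
        question = llm_call(
            RESOLVE_PROMPT.format(label=label, question=question)
        ).strip()

    # Action 3: answer generation on the (possibly rewritten) question.
    return llm_call(ANSWER_PROMPT.format(question=question)).strip()


if __name__ == "__main__":
    # Stub backend so the sketch runs without an API key; in practice this
    # would wrap a real GPT-3.5-Turbo or Llama-4-Scout call.
    def fake_llm(prompt: str) -> str:
        if prompt.startswith("Classify"):
            return "ambiguous"
        if "Rewritten question:" in prompt:
            return "Which Python version introduced the match statement?"
        return "Python 3.10 introduced the match statement."

    print(agent_answer("Which version introduced match?", fake_llm))
```

The extra LLM calls in actions 1 and 2 are the source of the latency overhead discussed above; for questions classified as normal, the loop falls through directly to answer generation.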