This paper focuses on relevance and utility, two common criteria for evaluating the effectiveness of information retrieval (IR) systems, and emphasizes the importance of prioritizing highly useful results in Retrieval-Augmented Generation (RAG), where large language models (LLMs) have limited input bandwidth. We connect the three core components of RAG (relevance ranking produced by retrieval models, utility judgments, and answer generation) with Schutz's philosophical framework of relevance, showing that these components correspond to three interdependent types of relevance that promote one another. Building on this perspective, we propose an iterative utility judgment framework (ITEM) that improves each stage of the RAG cycle. Experiments on the TREC DL, WebAP, NQ, and GTI-NQ datasets demonstrate that ITEM significantly improves utility judgments, relevance ranking, and answer generation compared with strong baselines.
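As a rough illustration only, not the authors' implementation, the sketch below shows one way such an iterate-judge-generate cycle could be wired together in Python. The names `retrieve`, `judge_utility`, and `generate_answer` are hypothetical placeholders for a retrieval model and LLM-backed prompts; the loop alternates between drafting an answer and re-scoring passages by their utility for that answer until the answer stabilizes.

```python
# Minimal sketch of an iterative utility-judgment loop for RAG.
# All callables here are hypothetical placeholders, not the paper's actual
# prompts or models.
from typing import Callable


def iterative_utility_rag(
    question: str,
    retrieve: Callable[[str, int], list[str]],          # query, k -> passages
    judge_utility: Callable[[str, str, list[str]], list[float]],
    generate_answer: Callable[[str, list[str]], str],
    k: int = 10,
    max_iters: int = 3,
) -> str:
    """Alternate answer generation and utility judgment until the
    answer reaches a fixed point or max_iters is exhausted."""
    passages = retrieve(question, k)            # initial relevance ranking
    answer = generate_answer(question, passages)
    for _ in range(max_iters):
        # Score each passage's usefulness for the question, conditioned
        # on the current draft answer.
        scores = judge_utility(question, answer, passages)
        passages = [p for _, p in sorted(
            zip(scores, passages), key=lambda t: -t[0])]
        new_answer = generate_answer(question, passages)
        if new_answer == answer:                # fixed point: stop early
            break
        answer = new_answer
    return answer


if __name__ == "__main__":
    # Toy stand-ins: term overlap for retrieval/utility, top passage as answer.
    corpus = ["Paris is the capital of France.",
              "France is in Europe.",
              "The Eiffel Tower is in Paris."]
    retrieve = lambda q, k: corpus[:k]
    judge_utility = lambda q, a, ps: [
        sum(w in p.lower() for w in q.lower().split()) for p in ps]
    generate_answer = lambda q, ps: ps[0]
    print(iterative_utility_rag("capital of France",
                                retrieve, judge_utility, generate_answer))
```

In practice the placeholders would be a dense or sparse retriever and LLM calls, and the stopping test could compare selected passage sets rather than exact answer strings; this sketch only fixes the control flow implied by the framework's description.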