This paper discusses Retrieval-Augmented Generation (RAG), a standard framework that combines large language models (LLMs) with document retrieval from external corpora for knowledge-intensive natural language processing tasks. Most RAG pipelines treat retrieval and inference as independent components: documents are retrieved once, and answers are then generated without further interaction. This static design limits performance on complex tasks that require iterative evidence gathering or high-precision retrieval. We review recent research in information retrieval (IR) and NLP that addresses this gap through adaptive retrieval and ranking methods incorporating feedback. We give a structured overview of these feedback-driven retrieval and ranking mechanisms, classifying feedback signals by their source and by their role in improving the query, the retrieved context, or the document pool. Our aim is to bridge the gap between IR and NLP perspectives, emphasizing retrieval as a dynamic and learnable component of an end-to-end RAG system.
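The contrast between static, retrieve-once pipelines and feedback-driven retrieval can be illustrated with a minimal sketch. The toy corpus, overlap scoring, fixed round count, and query-expansion rule (a pseudo-relevance-feedback-style expansion using terms from the top-ranked document) are illustrative assumptions, not a method from any specific paper surveyed here.

```python
# Illustrative sketch: one-shot retrieval vs. a feedback loop that expands
# the query with terms from top-ranked documents (pseudo-relevance feedback).
# Corpus, scoring function, and stopping rule are toy assumptions.
from collections import Counter

CORPUS = [
    "marie curie won the nobel prize in physics and chemistry",
    "the nobel prize in physics was first awarded in 1901",
    "radioactivity research led curie to discover polonium and radium",
]

def score(query_terms, doc):
    # Overlap score: number of distinct query terms appearing in the document.
    doc_terms = set(doc.split())
    return sum(1 for t in set(query_terms) if t in doc_terms)

def retrieve(query_terms, k=1):
    # Static retrieval: rank the corpus once and return the top-k documents.
    ranked = sorted(CORPUS, key=lambda d: score(query_terms, d), reverse=True)
    return ranked[:k]

def retrieve_with_feedback(query, rounds=2, expand=2):
    # Adaptive retrieval: after each round, feed terms from the top-ranked
    # document back into the query before retrieving again.
    terms = query.split()
    for _ in range(rounds):
        top = retrieve(terms, k=1)[0]
        fresh = Counter(t for t in top.split() if t not in terms)
        terms += [t for t, _ in fresh.most_common(expand)]
    return retrieve(terms, k=2), terms

docs, expanded = retrieve_with_feedback("curie prize")
print(expanded)  # original query terms plus terms added by feedback
print(docs[0])
```

Real systems replace the overlap score with dense or learned rankers and derive feedback from the generator itself (e.g., model uncertainty or generated sub-queries), but the control flow, retrieval interleaved with query refinement, is the same.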