This paper presents a method for incorporating external knowledge via Retrieval-Augmented Generation (RAG), which plays a fundamental role in improving large language models (LLMs) on knowledge-intensive tasks. Existing RAG paradigms often overlook the cognitive step of applying knowledge, leaving a gap between retrieved facts and task-specific inference. In this paper, we propose RAG+, a principled and modular extension that explicitly integrates application-aware inference into the RAG pipeline. RAG+ constructs a dual corpus of knowledge and aligned application examples, created either manually or automatically, and retrieves both during inference. This design enables LLMs not only to access relevant information but also to apply it within a structured, goal-oriented inference process. Experiments with multiple models across mathematics, law, and medicine demonstrate that RAG+ consistently outperforms standard RAG variants, with average improvements of 3-5% and gains of up to 7.5% in complex scenarios. By bridging retrieval with actionable application, RAG+ advances a more cognitively informed framework for knowledge integration and represents a step toward more interpretable and capable LLMs.
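To make the dual-corpus design concrete, the following is a minimal, self-contained Python sketch of retrieving paired knowledge and application examples and assembling an application-aware prompt. All names here (DUAL_CORPUS, embed, cosine, retrieve, build_prompt) and the toy bag-of-characters embedding are hypothetical stand-ins for illustration, not the authors' implementation.

```python
# Sketch of RAG+'s dual-corpus idea: each knowledge item is stored
# alongside an aligned application example, and both are retrieved
# together at inference time. Hypothetical helpers throughout.

from typing import List, Tuple
import math

# Toy aligned dual corpus: (knowledge, application example) pairs.
DUAL_CORPUS: List[Tuple[str, str]] = [
    ("The derivative of x^n is n*x^(n-1).",
     "Example: d/dx x^3 = 3x^2, so the slope of x^3 at x = 2 is 12."),
    ("Hearsay is generally inadmissible as evidence.",
     "Example: a witness's report of an out-of-court statement is excluded."),
]

def embed(text: str) -> List[float]:
    """Stand-in embedding: bag-of-characters counts.
    A real system would use a dense text encoder instead."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, top_k: int = 1) -> List[Tuple[str, str]]:
    """Rank knowledge items by similarity to the query; the aligned
    application examples come along because the corpus is paired."""
    q = embed(query)
    ranked = sorted(DUAL_CORPUS,
                    key=lambda pair: cosine(q, embed(pair[0])),
                    reverse=True)
    return ranked[:top_k]

def build_prompt(query: str) -> str:
    """Assemble an application-aware prompt: retrieved facts plus
    worked examples of how to apply them."""
    parts = [f"Question: {query}",
             "Relevant knowledge and how to apply it:"]
    for knowledge, application in retrieve(query):
        parts.append(f"- Knowledge: {knowledge}")
        parts.append(f"  Application: {application}")
    parts.append("Answer by applying the knowledge as in the examples.")
    return "\n".join(parts)

if __name__ == "__main__":
    print(build_prompt("What is the slope of x^3 at x = 2?"))
```

The key design point the sketch illustrates is that retrieval is keyed on the knowledge text alone, while the prompt surfaces both halves of each pair, so the model sees not just a fact but a demonstration of its use in task-specific inference.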