This paper proposes Logic Augmented Generation (LAG), a novel paradigm inspired by Descartes' methodological thinking, to address the hallucinations that arise when large language models (LLMs) perform knowledge-intensive tasks. LAG decomposes a complex question into atomic subquestions ordered by their logical dependencies and solves them sequentially, using earlier answers to guide contextual retrieval for subsequent subquestions. It further integrates a logical termination mechanism that halts inference when an unanswerable subquestion is encountered, preventing error propagation and avoiding unnecessary computation. Finally, it synthesizes all subsolutions into a validated answer. Experimental results on four benchmark datasets demonstrate that LAG improves the robustness of reasoning, reduces hallucinations, and aligns the LLM's problem-solving process with human cognition, offering a principled alternative to existing retrieval-augmented generation (RAG) systems.
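To make the control flow described above concrete, the following is a minimal, hypothetical sketch of the LAG loop in Python. The helper names (`decompose`, `retrieve`, `answer_subquestion`, `synthesize`) are placeholders supplied by the caller and are not taken from the paper's implementation; the sketch only illustrates the dependency-ordered solving, logical termination, and final synthesis steps.

```python
from typing import Callable, List, Optional

# Hypothetical sketch of the LAG control flow; the caller supplies the
# LLM-backed helpers. None of these names come from the paper itself.
def logic_augmented_generation(
    question: str,
    decompose: Callable[[str], List[str]],             # question -> atomic subquestions in dependency order
    retrieve: Callable[[str, List[str]], str],          # (subquestion, prior answers) -> retrieved context
    answer_subquestion: Callable[[str, str], Optional[str]],  # (subquestion, context) -> answer, or None if unanswerable
    synthesize: Callable[[str, List[str]], str],        # (question, all subanswers) -> final validated answer
) -> Optional[str]:
    subquestions = decompose(question)
    answers: List[str] = []
    for sq in subquestions:
        # Previous answers guide contextual retrieval for the current subquestion.
        context = retrieve(sq, answers)
        ans = answer_subquestion(sq, context)
        if ans is None:
            # Logical termination: stop when a subquestion cannot be answered,
            # preventing error propagation and unnecessary computation.
            return None
        answers.append(ans)
    # Synthesize all subsolutions into the final answer.
    return synthesize(question, answers)
```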