This paper proposes a Decision Model and Notation (DMN)-based prompting framework that leverages large language models (LLMs) to automate decision logic in knowledge-intensive processes. The framework decomposes complex decision logic into small, manageable components and guides the LLM along a structured decision path. Experiments applying the framework to assignment submission and feedback processes in graduate courses demonstrated superior performance over chain-of-thought (CoT) prompting, and student surveys confirmed its high usability.
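To make the decomposition idea concrete, the following is a minimal sketch (not the authors' implementation) of how a DMN-style decision requirement could be split into small per-node prompts that are evaluated in order, each answer feeding the next. The `call_llm` stub, `DecisionNode` type, and all node names are illustrative assumptions; the paper's actual prompt templates and decision tables are not reproduced here.

```python
from dataclasses import dataclass


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion client; replace as needed."""
    raise NotImplementedError("plug in a real LLM client here")


@dataclass
class DecisionNode:
    name: str          # identifier of this sub-decision
    question: str      # one small, self-contained question for the LLM
    inputs: list[str]  # facts or earlier sub-decisions this node depends on


def run_decision_path(nodes: list[DecisionNode], facts: dict[str, str]) -> dict[str, str]:
    """Evaluate nodes in dependency order, one focused LLM call per node."""
    results: dict[str, str] = dict(facts)
    for node in nodes:  # nodes assumed topologically sorted, as in a DMN DRD
        context = "\n".join(f"{key}: {results[key]}" for key in node.inputs)
        prompt = (
            f"Decision: {node.name}\n"
            f"Known facts:\n{context}\n"
            f"Answer only this question: {node.question}"
        )
        results[node.name] = call_llm(prompt)  # answer becomes input to later nodes
    return results


# Illustrative decision path for an assignment-feedback process:
nodes = [
    DecisionNode("on_time", "Was the submission made before the deadline?",
                 ["submitted_at", "deadline"]),
    DecisionNode("complete", "Does the submission include all required parts?",
                 ["submission"]),
    DecisionNode("feedback", "What feedback should the student receive?",
                 ["on_time", "complete"]),
]
```

Under this assumption, each LLM call sees only the facts its sub-decision needs, which is the contrast with a single monolithic CoT prompt that must reason over the full decision logic at once.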