In this paper, we present a novel Decision Flow (DF) framework that integrates additional guidance from the original sampler when sampling from the target distribution. DF can be viewed as an AI-based algorithmic reincarnation of the Markov Decision Process (MDP) approach in probabilistic optimal control. It extends the continuous-space, continuous-time path-integral diffusion sampling technique of [Behjoo, Chertkov, 2025] to discrete time and space, while generalizing the generative flow network (GFN) framework of [Bengio et al., 2021]. In its most basic form, DF exploits the linear solvability of the underlying MDP [Todorov, 2007] to adjust the transition probabilities of the original sampler via an explicit formulation that requires no neural network (NN). The resulting Markov process is expressed as a convolution of the inverse-time Green's function of the original sampler with the target distribution. We demonstrate the DF framework on the example of sampling from an Ising model, compare DF with Metropolis-Hastings to quantify its efficiency, discuss potential NN-based extensions, and provide an overview of how DF can improve guided sampling across various application areas.
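To make the closed-form guidance concrete, the following is a minimal sketch, not the authors' implementation, of the linearly-solvable-MDP re-weighting the abstract describes: a guidance potential h_t, obtained by running the base sampler's Green's function backward in time from the target (the convolution mentioned above), tilts the base transition probabilities in closed form with no neural network. The lazy random walk on a ring, the horizon T, and all variable names (P, h, target) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

N, T = 16, 10                       # ring of N states, horizon T

# Base sampler: lazy random walk on a ring (row-stochastic matrix P).
P = np.zeros((N, N))
for x in range(N):
    P[x, x] = 0.5
    P[x, (x - 1) % N] = 0.25
    P[x, (x + 1) % N] = 0.25

mu0 = np.full(N, 1.0 / N)           # initial distribution of the base chain
target = np.exp(-0.5 * ((np.arange(N) - N // 2) / 2.0) ** 2)
target /= target.sum()              # target distribution at time T

# Backward pass: h_T = target / (base marginal at T), then h_t = P h_{t+1}.
# h_t(x) is the inverse-time Green's function of the base sampler
# convolved with the target, evaluated at state x and time t.
muT = mu0 @ np.linalg.matrix_power(P, T)
h = [None] * (T + 1)
h[T] = target / muT
for t in range(T - 1, -1, -1):
    h[t] = P @ h[t + 1]

# Guided transitions: P_t(x, x') = P(x, x') h_{t+1}(x') / h_t(x).
# Each row sums to 1 by construction, since h_t = P h_{t+1}.
mu = mu0 * h[0]                     # tilted initial distribution
mu /= mu.sum()
for t in range(T):
    Pt = P * h[t + 1][None, :] / h[t][:, None]
    mu = mu @ Pt

print(np.allclose(mu, target))      # True: final marginal hits the target
```

Running the sketch prints `True`: after T guided steps the chain's marginal coincides with the target exactly, illustrating why, under linear solvability, no learned component is needed to steer the original sampler.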