To overcome the limitations of Markov Decision Processes (MDPs), we propose the Structurally Decomposed MDP (SD-MDP), which exploits causal structure. SD-MDP partitions the temporal causal graph of an MDP into independent components, achieving dimensionality reduction and computational efficiency. For resource allocation problems, it reduces the optimization problem to a fractional knapsack problem solvable in log-linear time, O(T log T) in the time horizon T. This improves on the polynomial complexity of conventional probabilistic programming methods, and the guarantee holds independently of the size of the state-action space, making the approach viable in high-dimensional settings. Furthermore, SD-MDP integrates with Monte Carlo Tree Search (MCTS) to achieve higher expected rewards under limited simulation budgets, and it admits a vanishing simple regret bound. Empirically, it demonstrates superior policy performance over benchmarks in logistics and finance.
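To illustrate the complexity claim only: a fractional knapsack is solved greedily by sorting items by value density and filling the budget in that order, so with one item per time step the cost is dominated by an O(T log T) sort. The minimal Python sketch below is our own illustration under that assumption; the names (`fractional_knapsack`, `marginal_rewards`, `resource_costs`) and the per-step item mapping are expository choices, not the paper's construction.

```python
from typing import List


def fractional_knapsack(values: List[float], weights: List[float],
                        capacity: float) -> float:
    """Greedy fractional knapsack: O(n log n) for the sort, O(n) for the scan."""
    # Sort item indices by value density (value per unit weight), descending.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total = 0.0
    remaining = capacity
    for i in order:
        if remaining <= 0:
            break
        # Take the item fully if it fits, otherwise take a fraction of it.
        take = min(weights[i], remaining)
        total += values[i] * (take / weights[i])
        remaining -= take
    return total


# Hypothetical usage: one "item" per time step t = 1..T, so the sort
# dominates and the overall runtime is O(T log T), independent of the
# size of the underlying state-action space.
marginal_rewards = [3.0, 1.5, 4.0, 2.0, 2.5]  # illustrative per-step values
resource_costs = [1.0, 2.0, 1.5, 1.0, 2.5]    # illustrative per-step costs
budget = 4.0
print(fractional_knapsack(marginal_rewards, resource_costs, budget))
```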