This paper proposes a novel method for solving a multi-component monotonic partially observable Markov decision process (POMDP) under a limited budget. Monotonic POMDPs are well-suited for modeling systems in which the state progressively degrades until a repair action restores it, making them particularly effective for sequential repair problems. Existing methods become computationally intractable because the joint state space grows exponentially with the number of components. To address this issue, we present a two-step approach. First, we approximate the optimal value function of each component POMDP with a random forest model to efficiently allocate the total budget across components. Next, we solve each resulting independent, budget-constrained single-component monotonic POMDP with an oracle-guided, meta-trained proximal policy optimization (PPO) algorithm, where the oracle policy is obtained through value iteration over the corresponding monotonic Markov decision process (MDP). We demonstrate the effectiveness of the proposed method on a real-world inspection and repair scenario for an administrative building, and we establish its scalability by analyzing the computational complexity as a function of the number of components.
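To make the first step more concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of how a random-forest surrogate of per-component value functions could drive the budget allocation. The feature layout, placeholder training data, and greedy allocation rule are all assumptions introduced here for illustration only.

```python
# Illustrative sketch, assuming a surrogate V_hat(component_features, budget)
# fitted on values obtained by solving single-component POMDPs offline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholder training data: rows are [feature_1, ..., feature_3, allocated_budget],
# labels are the corresponding (approximate) optimal values.
X_train = np.random.rand(500, 4)
y_train = np.random.rand(500)
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_train, y_train)

def allocate_budget(component_features, total_budget, step=1.0):
    """Greedily assign budget increments to the component whose predicted
    value gains the most under the surrogate model (a hypothetical rule)."""
    n = len(component_features)
    alloc = np.zeros(n)
    remaining = total_budget
    while remaining >= step:
        gains = []
        for i, feats in enumerate(component_features):
            cur = surrogate.predict([np.append(feats, alloc[i])])[0]
            nxt = surrogate.predict([np.append(feats, alloc[i] + step)])[0]
            gains.append(nxt - cur)
        best = int(np.argmax(gains))
        alloc[best] += step
        remaining -= step
    return alloc

# Example: split a budget of 10 units across three components with 3 features each.
features = [np.random.rand(3) for _ in range(3)]
print(allocate_budget(features, total_budget=10.0))
```

Each component's share would then define the budget constraint for its single-component POMDP in the second step.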