This paper proposes a meta-plan optimization (MPO) framework to improve the interactive planning performance of agents built on large language models (LLMs). Existing methods either suffer from planning hallucinations or require retraining for each new agent; MPO instead enhances the agent's planning ability by directly incorporating explicit guidance in the form of meta-plans. Rather than relying on complex, task-specific knowledge, MPO leverages high-level, general guidance and continuously optimizes the meta-plan based on feedback from the agent's task execution. Experiments on two representative tasks show that MPO significantly outperforms existing baselines and provides a plug-and-play solution that improves both task completion efficiency and generalization to previously unseen scenarios.
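At a high level, the loop the abstract describes — generate a meta-plan, let the agent execute the task with it, then refine the plan from execution feedback — might look like the following minimal sketch. All names here (generate_meta_plan, run_agent, refine_meta_plan) and the toy stubs are illustrative assumptions, not the paper's actual implementation; real versions would call a meta-planner LLM and an agent environment.

```python
"""Hypothetical sketch of a meta-plan optimization loop (not the paper's API)."""

from dataclasses import dataclass


@dataclass
class Feedback:
    success: bool
    notes: str  # e.g., which sub-goal the agent failed on


def generate_meta_plan(task: str) -> str:
    # Assumption: in MPO this would come from a meta-planner LLM; stubbed here.
    return f"1. Understand '{task}'  2. Decompose into sub-goals  3. Act"


def run_agent(task: str, meta_plan: str, attempt: int) -> Feedback:
    # Stand-in for an agent rollout conditioned on the meta-plan; a real
    # implementation would return feedback from the task environment.
    return Feedback(success=attempt >= 2, notes="sub-goal 2 too vague")


def refine_meta_plan(meta_plan: str, fb: Feedback) -> str:
    # Stand-in for feedback-driven revision of the high-level guidance.
    return meta_plan + f"  [revised: {fb.notes}]"


def mpo_loop(task: str, max_rounds: int = 5) -> str:
    """Iteratively improve the meta-plan from task-execution feedback."""
    meta_plan = generate_meta_plan(task)
    for attempt in range(max_rounds):
        fb = run_agent(task, meta_plan, attempt)
        if fb.success:
            break
        meta_plan = refine_meta_plan(meta_plan, fb)
    return meta_plan


if __name__ == "__main__":
    print(mpo_loop("put a clean mug on the desk"))
```

Because the meta-plan is a piece of text rather than model weights, this loop is what makes the approach plug-and-play: the same optimized guidance can be handed to a different agent without retraining it.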