This paper presents a novel robotic control framework for long-horizon object manipulation. Because existing learning-based approaches rely on large, task-specific datasets and struggle to generalize to unseen scenarios, this study proposes a closed-loop framework in which a large language model (LLM) generates directly executable code plans, rather than relying on pre-trained low-level controllers. Guided by Chain-of-Thought (CoT) prompting and progressively structured examples, the LLM produces robust, generalizable task plans within a few iterations. An RGB-D-based reporter evaluates execution results and provides structured feedback, enabling error correction and replanning under partial observation. This design eliminates step-by-step inference, reduces computational overhead, and limits the error accumulation observed in prior methods. The framework achieves state-of-the-art performance on over 30 diverse long-horizon tasks, both seen and unseen, in cluttered real-world environments, evaluated on benchmarks including LoHoRavens, CALVIN, and Franka Kitchen.