This paper proposes the Progressive Prompt Decision Transformer (P2DT) to address catastrophic forgetting, which degrades performance when intelligent agents controlled by large-scale models encounter new tasks. P2DT extends the Transformer-based decision model by dynamically appending task-specific decision tokens as new tasks are learned, fostering task-specific policies and mitigating forgetting in both continual and offline reinforcement learning settings. Furthermore, P2DT leverages trajectories collected by conventional reinforcement learning on all tasks and generates new task-specific tokens during training, thereby preserving knowledge acquired from earlier tasks. Preliminary results demonstrate that the model effectively mitigates catastrophic forgetting and scales well as the number of tasks grows.
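
To illustrate the core mechanism described above, the following is a minimal sketch of per-task prompt tokens prepended to a decision transformer's trajectory embeddings, written in a PyTorch style; the class name `ProgressivePromptPool`, its parameters, and the freezing policy are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn as nn


class ProgressivePromptPool(nn.Module):
    """Sketch: each new task receives its own learnable prompt tokens,
    while prompts of previously learned tasks are frozen."""

    def __init__(self, embed_dim: int, prompt_len: int = 5):
        super().__init__()
        self.embed_dim = embed_dim
        self.prompt_len = prompt_len
        # Maps task id -> learnable prompt tokens of shape (prompt_len, embed_dim).
        self.prompts = nn.ParameterDict()

    def add_task(self, task_id: str) -> None:
        # Freeze prompts of all earlier tasks to preserve their policies.
        for p in self.prompts.values():
            p.requires_grad_(False)
        # Allocate fresh trainable prompt tokens for the new task.
        self.prompts[task_id] = nn.Parameter(
            torch.randn(self.prompt_len, self.embed_dim) * 0.02
        )

    def forward(self, task_id: str, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, embed_dim), e.g. the embedded
        # return-to-go / state / action sequence of a decision transformer.
        batch = token_embeddings.size(0)
        prompt = self.prompts[task_id].unsqueeze(0).expand(batch, -1, -1)
        # Prepend the task-specific prompt tokens to the trajectory sequence,
        # which then feeds into the shared transformer backbone.
        return torch.cat([prompt, token_embeddings], dim=1)


# Hypothetical usage: the shared backbone is reused across tasks, and only the
# newest task's prompt tokens are trained.
pool = ProgressivePromptPool(embed_dim=128)
pool.add_task("task_a")                     # first task: prompts trainable
pool.add_task("task_b")                     # earlier prompts frozen
trajectory = torch.randn(4, 60, 128)        # batch of embedded trajectory tokens
augmented = pool("task_b", trajectory)      # shape (4, 65, 128)
```

Freezing earlier prompts while sharing the transformer backbone is one plausible way to realize the knowledge-preservation property claimed above: old task behavior is encoded in its frozen tokens, and new tasks only add parameters rather than overwrite them.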