In this paper, we propose a dynamic token modulation and extension (DTME-MTL) framework applicable to transformer-based multi-task learning (MTL) architectures, addressing the negative transfer problem that arises from differing task targets. To overcome the limitations of a fixed network capacity and architecture, DTME-MTL identifies gradient conflicts in token space and applies an adaptive remedy according to the conflict type, improving adaptability while reducing overfitting. Unlike conventional methods that replicate network parameters, it operates entirely in token space, enabling efficient adaptation without augmenting network parameters. Experimental results demonstrate that DTME-MTL is a scalable and effective solution for improving multi-task performance with minimal computational overhead.
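To make the core idea concrete, the following minimal PyTorch sketch illustrates one way conflicts between task gradients could be detected per token and handled by modulating or extending tokens. All names (`token_conflict_scores`, `adapt_tokens`, `task_scale`, `extra_tokens`) and the cosine-similarity conflict test are illustrative assumptions, not the paper's exact criterion or implementation.

```python
import torch
import torch.nn.functional as F

def token_conflict_scores(grad_a: torch.Tensor, grad_b: torch.Tensor) -> torch.Tensor:
    # Per-token cosine similarity between two tasks' gradients w.r.t. the shared
    # token representations; negative values signal a direction conflict.
    # grad_a, grad_b: (num_tokens, dim)
    return F.cosine_similarity(grad_a, grad_b, dim=-1)

def adapt_tokens(tokens, grad_a, grad_b, task_scale, task_shift, extra_tokens,
                 threshold=0.0):
    # Hypothetical per-token remedy: affine modulation of tokens whose gradients
    # conflict in direction, plus appended task-specific tokens (extension) that
    # give conflicting tasks extra capacity in token space rather than in weights.
    sim = token_conflict_scores(grad_a, grad_b)          # (num_tokens,)
    conflict = (sim < threshold).unsqueeze(-1)           # (num_tokens, 1)
    modulated = torch.where(conflict, tokens * task_scale + task_shift, tokens)
    return torch.cat([modulated, extra_tokens], dim=0)   # token extension

# Toy usage with random tensors standing in for one transformer block's tokens.
num_tokens, dim, num_extra = 16, 64, 4
tokens = torch.randn(num_tokens, dim)
grad_a, grad_b = torch.randn(num_tokens, dim), torch.randn(num_tokens, dim)
task_scale = torch.nn.Parameter(torch.ones(dim))
task_shift = torch.nn.Parameter(torch.zeros(dim))
extra_tokens = torch.nn.Parameter(torch.randn(num_extra, dim))
adapted = adapt_tokens(tokens, grad_a, grad_b, task_scale, task_shift, extra_tokens)
print(adapted.shape)  # torch.Size([20, 64])
```

Note that the learnable quantities here live only in token space (a per-dimension scale and shift plus a handful of extra tokens), which is what keeps the adaptation lightweight compared with replicating network weights per task.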