This paper proposes Faster Parameter-Efficient Tuning (FPET), a novel method that improves the inference speed and training efficiency of Parameter-Efficient Tuning (PET). Existing PET methods retain the inherent inference latency of their large-scale base models and add computational overhead through their extra modules. FPET introduces a plug-and-play token redundancy reduction module designed specifically for PET: it refines the tokens produced by the self-attention layer and removes redundant ones through a fully differentiable token merging strategy. As a result, FPET achieves faster inference and higher memory efficiency while maintaining performance comparable to existing PET methods.
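The abstract does not spell out the merging mechanism, so the following is only a minimal PyTorch sketch of one plausible reading: redundant tokens are matched to similar neighbors via bipartite cosine similarity and merged into them, with a straight-through estimator keeping the hard top-r selection differentiable. The function name `merge_tokens`, the alternating split, the sigmoid relaxation, and the straight-through trick are illustrative assumptions, not FPET's published implementation.

```python
# Hypothetical sketch of differentiable token merging; not the authors' code.
import torch
import torch.nn.functional as F


def merge_tokens(x: torch.Tensor, r: int) -> torch.Tensor:
    """Remove the r most redundant tokens by merging each into its most
    similar neighbor. x: (batch, tokens, dim) -> (batch, tokens - r, dim)."""
    B, N, D = x.shape
    a, b = x[:, ::2], x[:, 1::2]               # alternating bipartite split
    Na = a.size(1)

    # Cosine similarity between the two token sets.
    sim = F.normalize(a, dim=-1) @ F.normalize(b, dim=-1).transpose(-2, -1)
    score, match = sim.max(dim=-1)             # each a-token's best partner in b

    # Soft redundancy weights (differentiable) and a hard top-r merge mask.
    soft = torch.sigmoid(score)
    hard = torch.zeros_like(soft)
    hard.scatter_(-1, score.topk(r, dim=-1).indices, 1.0)

    # Straight-through estimator: hard decision in the forward pass,
    # gradients flow through the soft weights in the backward pass.
    mask = hard + soft - soft.detach()         # == hard when evaluated forward

    # Add each merged token's features onto its matched token in b.
    src = a * mask.unsqueeze(-1)
    b = b.scatter_add(1, match.unsqueeze(-1).expand(-1, -1, D), src)

    # Keep the Na - r least redundant a-tokens, preserving their order,
    # weighted by (1 - mask) so the keep decision is differentiable too.
    keep_idx = (-score).topk(Na - r, dim=-1).indices.sort(dim=-1).values
    keep_w = 1.0 - mask.gather(1, keep_idx)    # ~1 for kept tokens (forward)
    kept = a.gather(1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
    return torch.cat([kept * keep_w.unsqueeze(-1), b], dim=1)


tokens = torch.randn(2, 197, 768, requires_grad=True)  # ViT-B/16-sized input
out = merge_tokens(tokens, r=16)
print(out.shape)         # torch.Size([2, 181, 768])
out.sum().backward()     # gradients flow through the merge decision
```

Because the straight-through estimator passes the hard top-r mask forward while routing gradients through the soft weights, the merge decision can be trained end-to-end alongside the PET modules, which is one way a "fully differentiable token merging strategy" can be realized.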