In this paper, we propose Frequency Dynamic Attention Modulation (FDAM), a novel technique that addresses frequency decay, a major limitation of Vision Transformers (ViTs). Inspired by circuit theory, FDAM consists of two components: Attention Inversion (AttInv), which inverts the low-pass filtering characteristic of the attention mechanism, and Frequency Dynamic Scaling (FreqScale), which re-weights individual frequency components. Together, these techniques let us directly adjust the frequency response of ViTs, preventing the loss of fine details and textures, and yield consistent performance improvements across models (SegFormer, DeiT, MaskDINO) and tasks (semantic segmentation, object detection, instance segmentation). In particular, FDAM achieves state-of-the-art performance in remote sensing.
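To make the two components concrete, below is a minimal PyTorch sketch of how AttInv and FreqScale could be realized, based only on the descriptions above: AttInv forms a complementary (high-pass) filter by inverting the attention matrix and mixing it with the original low-pass attention, while FreqScale applies a learnable per-frequency scale to token features. The module names, shapes, and mixing weights are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class AttInv(nn.Module):
    """Attention Inversion (sketch): complement the low-pass attention matrix
    with an inverted (high-pass) counterpart and mix them with learnable weights."""

    def __init__(self):
        super().__init__()
        self.w_low = nn.Parameter(torch.tensor(1.0))   # weight for low-pass path (assumed)
        self.w_high = nn.Parameter(torch.tensor(0.5))  # weight for high-pass path (assumed)

    def forward(self, attn, v):
        # attn: (B, heads, N, N) row-stochastic attention, which acts as a low-pass filter
        # v:    (B, heads, N, d) value tokens
        n = attn.size(-1)
        identity = torch.eye(n, device=attn.device).expand_as(attn)
        attn_high = identity - attn                    # inverted attention: high-pass filter
        mixed = self.w_low * attn + self.w_high * attn_high
        return mixed @ v                               # (B, heads, N, d)


class FreqScale(nn.Module):
    """Frequency Dynamic Scaling (sketch): re-weight frequency components of the
    token features with a learnable per-frequency scale (1D rFFT over tokens assumed)."""

    def __init__(self, num_tokens, dim):
        super().__init__()
        # one learnable scale per (frequency bin, channel); initialized to identity
        self.scale = nn.Parameter(torch.ones(num_tokens // 2 + 1, dim))

    def forward(self, x):
        # x: (B, N, C) token features
        x_freq = torch.fft.rfft(x, dim=1)              # to frequency domain along tokens
        x_freq = x_freq * self.scale                   # boost or attenuate chosen frequencies
        return torch.fft.irfft(x_freq, n=x.size(1), dim=1)
```

In this reading, AttInv counteracts the attention mechanism's inherent low-pass behavior by adding an explicit high-pass path, and FreqScale then provides fine-grained control over how strongly each frequency band contributes, which is how the frequency response can be adjusted to preserve details and textures.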