In this paper, we propose frequency dynamic attention modulation (FDAM), a technique that addresses frequency loss, a major limitation of vision transformers (ViTs). The standard attention mechanism of ViTs acts as a low-pass filter, causing the loss of fine detail and texture. FDAM directly modulates the frequency response of ViTs through two techniques: attention inversion (AttInv), which produces high-pass filtering by inverting the attention matrix, and frequency dynamic scaling (FreqScale), which re-weights different frequency components. FDAM yields consistent improvements across models such as SegFormer, DeiT, and MaskDINO on semantic segmentation, object detection, and instance segmentation tasks, and in particular achieves state-of-the-art performance in remote sensing detection.
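To make the mechanism concrete, the following is a minimal PyTorch sketch of the underlying idea: since a row-stochastic attention matrix acts as a low-pass (smoothing) operator, its complement can serve as a high-pass branch (the AttInv intuition), and learnable weights can re-balance the two branches (a highly simplified stand-in for FreqScale, which in the paper re-weights multiple frequency components). The class name `FDAMAttentionSketch` and the two scalar mixing weights are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FDAMAttentionSketch(nn.Module):
    """Illustrative single-head attention with an AttInv-style high-pass
    branch and a simplified FreqScale-style mixing of the two branches."""

    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Learnable scalars weighting the low- and high-frequency branches
        # (a simplified stand-in for frequency dynamic scaling).
        self.low_weight = nn.Parameter(torch.ones(1))
        self.high_weight = nn.Parameter(torch.ones(1))

    def forward(self, x):  # x: (batch, tokens, dim)
        B, N, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = F.softmax(q @ k.transpose(-2, -1) / D ** 0.5, dim=-1)

        low = attn @ v  # standard attention: low-pass smoothing of token features
        # AttInv intuition: the complement (I - A) of a row-stochastic attention
        # matrix emphasizes how a token differs from its aggregated context,
        # i.e. it behaves like a high-pass filter.
        identity = torch.eye(N, device=x.device).expand(B, N, N)
        high = (identity - attn) @ v

        out = self.low_weight * low + self.high_weight * high
        return self.proj(out)


if __name__ == "__main__":
    layer = FDAMAttentionSketch(dim=64)
    tokens = torch.randn(2, 16, 64)
    print(layer(tokens).shape)  # torch.Size([2, 16, 64])
```

The design choice to keep both branches and learn their balance reflects the abstract's claim: rather than replacing attention, FDAM modulates its frequency response so that high-frequency detail is retained alongside the usual low-pass aggregation.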