This paper presents MaCP (Minimal yet Mighty Adaptive Cosine Projection), a novel adaptive method for fine-tuning large-scale base models that targets superior performance with minimal parameters and memory. MaCP exploits the strong energy-compaction and decorrelation properties of the cosine projection to improve both fine-tuning efficiency and accuracy. Specifically, we project the weight changes from low-dimensional adaptation into the discrete cosine space, partition them across different levels of the discrete cosine spectrum, and select the most significant frequency components from each partition. Through experiments on a wide range of unimodal tasks (e.g., natural language understanding, natural language generation, and text summarization) and multimodal tasks (e.g., image classification and video understanding), we demonstrate that MaCP consistently delivers higher accuracy, significantly reduced computational complexity, and lower memory requirements than existing alternatives.
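To make the projection-partition-selection pipeline concrete, the following is a minimal sketch (not the authors' implementation) of the core idea: project a low-rank weight update into the 2-D discrete cosine space, partition the spectrum into frequency bands, and retain only the largest-magnitude coefficients within each band. The function names, band construction, and selection budget below are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of cosine-space sparsification of a weight update.
import numpy as np
from scipy.fft import dctn, idctn


def select_cosine_components(delta_w, num_levels=4, keep_per_level=8):
    """Keep the largest-magnitude DCT coefficients within each frequency band (assumed scheme)."""
    spectrum = dctn(delta_w, norm="ortho")          # project the update into discrete cosine space
    rows, cols = np.indices(spectrum.shape)
    # Assumption: partition the spectrum into anti-diagonal bands from low to high frequency.
    level = ((rows + cols) * num_levels) // (sum(spectrum.shape) - 1)

    mask = np.zeros_like(spectrum, dtype=bool)
    for lvl in range(num_levels):
        idx = np.flatnonzero(level == lvl)
        if idx.size == 0:
            continue
        magnitudes = np.abs(spectrum.ravel()[idx])
        top = idx[np.argsort(magnitudes)[-keep_per_level:]]  # most significant components in this band
        mask.ravel()[top] = True

    sparse_spectrum = np.where(mask, spectrum, 0.0)
    return idctn(sparse_spectrum, norm="ortho")     # reconstruct the sparsified update


# Example: sparsify a rank-4 update on a 64x64 weight matrix.
A, B = np.random.randn(64, 4), np.random.randn(4, 64)
delta_w = A @ B
compressed_update = select_cosine_components(delta_w)
```

Under these assumptions, only `num_levels * keep_per_level` coefficients per weight matrix need to be stored and trained, which is the sense in which a cosine-space selection can cut both parameter count and memory.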