This paper proposes a novel Sparse Optimization (SO) framework to address the overfitting and computational constraints encountered when adapting Vision-Language Models (VLMs) to new domains. Unlike existing low-dimensional reparameterization methods, SO exploits the high-dimensional sparsity of the parameter space, dynamically updating only a small subset of parameters. Specifically, it introduces two paradigms, "local sparsity and global density" and "local randomness and global importance", to mitigate overfitting and ensure stable adaptation in low-data regimes. Experiments on 11 diverse datasets show that SO achieves state-of-the-art few-shot adaptation performance while reducing memory overhead.
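To make the idea of sparsely updating parameters concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of a masked update step: part of the update budget goes to coordinates with the largest gradient magnitude (a stand-in for "global importance"), and the rest is drawn uniformly at random (a stand-in for "local randomness"). The function name, the 50/50 split, and the flat-vector representation are all illustrative assumptions.

```python
import random

def sparse_update(params, grads, lr=0.1, density=0.1, rng=None):
    """Hypothetical sketch: update only a small fraction of parameters.

    Half the budget targets entries with the largest |gradient|
    ("global importance"); the remainder is sampled uniformly at
    random ("local randomness") so updates are not concentrated on
    a few coordinates, which can help reduce overfitting.
    """
    rng = rng or random.Random(0)
    n = len(params)
    k = max(1, int(density * n))  # total number of entries to update
    # "Global importance": indices of the largest-magnitude gradients.
    by_importance = sorted(range(n), key=lambda i: -abs(grads[i]))[: k // 2]
    chosen = set(by_importance)
    # "Local randomness": fill the remaining budget with random indices.
    remaining = [i for i in range(n) if i not in chosen]
    chosen.update(rng.sample(remaining, k - len(chosen)))
    # Apply a plain SGD step only on the chosen coordinates.
    return [p - lr * g if i in chosen else p
            for i, (p, g) in enumerate(zip(params, grads))]
```

With `density=0.1`, roughly 90% of the parameters are left untouched each step, which is where the memory and compute savings of sparse adaptation come from.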