This paper highlights that existing parameter-efficient fine-tuning (PEFT) methods learn new low-rank or sparse weights in parallel with the pre-trained weights ($W$), but learn them from scratch, resulting in a performance gap relative to full fine-tuning. To address this, we propose VectorFit, a novel parameterization method that efficiently exploits the knowledge already encoded in $W$ by adaptively training its singular vectors and biases, thereby generating a high-rank incremental weight matrix $\Delta W$, as in full fine-tuning. Through experiments on 19 diverse language and vision tasks, including natural language understanding and generation, question answering, image classification, and image generation, we demonstrate that VectorFit achieves superior performance compared to existing PEFT methods with nine times fewer learnable parameters.
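The core idea, that training only a singular-value-sized vector of $W$ can still yield a high-rank $\Delta W$, can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: it freezes the pre-trained singular directions $U, V$ of $W$ and treats only the singular-value vector (plus a bias) as trainable.

```python
import numpy as np

# Minimal sketch (NOT the authors' code) of an SVD-based parameterization
# in the spirit of VectorFit: freeze the singular directions U, V of the
# pre-trained weight W and train only the singular-value vector s (and a
# bias), so the effective update dW = U diag(s) V^T - W can be full-rank
# while only O(min(m, n)) parameters are trainable.

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 6))           # pre-trained weight (frozen)
U, s0, Vt = np.linalg.svd(W, full_matrices=False)

s = s0.copy()                             # trainable: 6 singular values
b = np.zeros(8)                           # trainable: bias (hypothetical)

# One illustrative "update" to s; a real setup would use gradient descent.
s += 0.1

W_adapted = U @ np.diag(s) @ Vt
dW = W_adapted - W                        # here dW = 0.1 * U @ Vt
print(dW.shape, np.linalg.matrix_rank(dW))  # → (8, 6) 6  (full-rank update)
```

Even though only 6 singular values changed, the resulting $\Delta W$ has full rank 6, in contrast to a rank-$r$ adapter with $r \ll \min(m, n)$.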