This paper proposes Information-Preserving Adaptation (IPA), a feature-aware projection framework that addresses a limitation of parameter-efficient fine-tuning (PEFT) methods such as LoRA. Whereas LoRA relies on a randomly initialized dimensionality reduction, which incurs information loss, IPA explicitly preserves information in the reduced hidden space via an algorithm that approximates the top principal components of the features. In the linear case, IPA enables efficient projector pretraining and adds negligible inference overhead.
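
The paper's exact algorithm is not reproduced here; the sketch below only illustrates the core idea under stated assumptions: initializing a down-projection from the top principal components of sampled hidden activations (via a truncated SVD) instead of LoRA's random initialization. All names (`pca_projector`, the shapes, the stand-in activations `H`) are hypothetical.

```python
import torch

def pca_projector(hidden_states: torch.Tensor, r: int) -> torch.Tensor:
    """Estimate a rank-r down-projection from the top principal
    components of sampled hidden states (features x samples layout)."""
    # Center the features, then take a truncated SVD; the leading
    # left singular vectors span the top-r principal subspace.
    X = hidden_states - hidden_states.mean(dim=1, keepdim=True)
    U, _, _ = torch.linalg.svd(X, full_matrices=False)
    return U[:, :r]  # shape: (d_model, r)

# Usage: project activations into an information-preserving subspace.
d_model, n_samples, r = 768, 2048, 16
H = torch.randn(d_model, n_samples)   # stand-in for cached activations
P = pca_projector(H, r)               # replaces a random down-projection
reduced = P.T @ H                     # (r, n_samples) low-rank representation
```

Because the principal subspace captures the directions of highest feature variance, projecting onto it discards less information than a random projection of the same rank, which is the intuition the paragraph above attributes to IPA.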