This paper presents an efficient method for reusing previously trained task knowledge when a new release of a foundation model becomes available. To address the mismatch between the parameter spaces of the old and new models when reusing existing parameter changes (task vectors), we focus on the gradient sign structure of the new model. We propose GradFix, a novel method that approximates this ideal sign structure using only a small number of labeled samples and uses it to transfer knowledge. GradFix adapts to the target model by computing a few gradients and masking the source task vectors accordingly, without any additional fine-tuning. This effectively rebases the task vectors onto the new pretraining by producing updates that are locally aligned with the target loss gradient. We provide a theoretical guarantee of first-order descent and empirically demonstrate performance gains over existing methods on vision and language benchmarks.
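As a rough illustration only (not the authors' reference implementation; the function names and the exact sign-agreement rule are assumptions based on the description above), the following PyTorch-style sketch computes a few-shot gradient on the target model and keeps only the task-vector components whose sign agrees with the descent direction:

```python
import torch

def gradfix_mask(target_model, loss_fn, few_shot_batch, task_vector):
    """Sketch of a GradFix-style rebasing step (assumed details).

    task_vector: dict mapping parameter names to source-model deltas
                 (fine-tuned source weights minus source pretrained weights).
    Returns a masked task vector intended to be added to the target
    model's weights, aligned with the target few-shot loss gradient.
    """
    # 1. Compute gradients of the target loss on a small labeled batch.
    target_model.zero_grad()
    inputs, labels = few_shot_batch
    loss = loss_fn(target_model(inputs), labels)
    loss.backward()

    masked = {}
    for name, param in target_model.named_parameters():
        if name not in task_vector or param.grad is None:
            continue
        delta = task_vector[name]
        # 2. Keep only components whose sign matches the descent
        #    direction (-grad); zero out the rest. This sign-agreement
        #    masking rule is an assumption inferred from the abstract.
        agree = torch.sign(delta) == torch.sign(-param.grad)
        masked[name] = delta * agree
    return masked

# Hypothetical usage: apply the masked deltas to the target model.
# masked_tv = gradfix_mask(target_model, loss_fn, batch, task_vector)
# with torch.no_grad():
#     for name, param in target_model.named_parameters():
#         if name in masked_tv:
#             param.add_(masked_tv[name])
```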