Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

Modular Delta Merging with Orthogonal Constraints: A Scalable Framework for Continual and Reversible Model Composition

Created by
  • Haebom

Authors

Haris Khan, Sadia Asif, Shumaila Asif

Outline

In real-world machine learning deployments, models must be continually updated, composed, and selectively unlearned as requirements change. Existing model merging and continual learning approaches, however, often suffer from task interference, catastrophic forgetting, or a lack of reversibility. This paper proposes Modular Delta Merging with Orthogonal Constraints (MDM-OC), a novel framework that enables scalable, interference-free, and reversible composition of fine-tuned models. Each task-specific model is encoded as a delta from a shared base model and projected into an orthogonal subspace to eliminate conflicts. The projected deltas are then merged via gradient-based optimization into a unified model that maintains performance across all tasks. The approach supports continual integration of new models, structured unmerging for compliance with regulations such as the GDPR, and model stability through elastic weight consolidation and synthetic replay. Extensive experiments on vision and natural language processing benchmarks show that MDM-OC outperforms prior baselines in accuracy, backward transfer, and unmerging fidelity while remaining memory-efficient and computationally tractable. The framework offers a principled solution for designing modular, regulation-compliant AI systems.
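
The pipeline described above (delta encoding, orthogonal projection, reversible merging) can be sketched compactly. The snippet below is a minimal, hypothetical NumPy illustration, not the authors' implementation: each model is assumed to be a flattened parameter vector, the class and function names are invented for this sketch, and the paper's gradient-based tuning of merge coefficients is replaced by fixed weights.

```python
import numpy as np

def task_delta(base, finetuned):
    """Encode a task-specific model as a delta from the shared base."""
    return finetuned - base

def project_orthogonal(new_delta, existing_deltas):
    """Gram-Schmidt-style projection: strip from new_delta any component
    overlapping previously merged deltas, so tasks cannot interfere."""
    d = new_delta.astype(float).copy()
    for q in existing_deltas:
        d -= (d @ q) / (q @ q + 1e-12) * q
    return d

class DeltaMerger:
    """Hypothetical stand-in for MDM-OC's model composer."""

    def __init__(self, base):
        self.base = base.astype(float)
        self.deltas = []  # mutually orthogonal task deltas

    def add_task(self, finetuned):
        # Continual integration: orthogonalize the new delta, then keep it.
        d = project_orthogonal(task_delta(self.base, finetuned), self.deltas)
        self.deltas.append(d)

    def remove_task(self, index):
        # Reversibility: because deltas are mutually orthogonal, dropping
        # one (e.g., for a GDPR deletion request) leaves the rest intact.
        self.deltas.pop(index)

    def merged_model(self, weights=None):
        # Unified model; the paper optimizes these coefficients by
        # gradient descent, while this sketch defaults them to 1.0.
        if weights is None:
            weights = np.ones(len(self.deltas))
        return self.base + sum(w * d for w, d in zip(weights, self.deltas))
```

In this toy version, unmerging is exact vector subtraction; the orthogonality constraint is precisely what makes such clean removal possible, since overlapping deltas would otherwise entangle the tasks.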

Takeaways, Limitations

Takeaways:
• Provides a scalable, interference-free, and reversible framework for composing fine-tuned models.
• Supports structured unmerging of individual models to comply with regulations such as the GDPR.
• Improves model stability through elastic weight consolidation and synthetic replay (see the sketch after this list).
• Outperforms existing methods on vision and natural language processing benchmarks in accuracy, backward transfer, and unmerging fidelity.
• Memory-efficient and computationally tractable.
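
The stability bullet above refers to elastic weight consolidation (EWC) combined with synthetic replay. As a rough illustration of the EWC half only, here is a hedged PyTorch-style sketch of the standard quadratic penalty that anchors merged parameters to values important for earlier tasks; `fisher` and `ref_params` are assumed to be precomputed diagonal Fisher estimates and reference weights, and this is generic EWC rather than the paper's exact formulation.

```python
import torch

def ewc_penalty(model, ref_params, fisher, lam=1.0):
    """Standard EWC penalty: penalize each parameter's drift from its
    reference value, weighted by its Fisher-information importance."""
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - ref_params[name]) ** 2).sum()
    return 0.5 * lam * loss
```

During merging, such a term would be added to the task loss so that optimizing the unified model does not erase parameters that earlier tasks depend on.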
Limitations:
• The paper does not explicitly state its own Limitations. Further experiments on diverse datasets are needed to verify generalization performance, and follow-up work may surface issues such as dependence on specific hardware environments or limits to scalability.