In real-world machine learning deployments, models must be continually updated, composed with one another, and selectively retired as needed. However, existing model merging and continual learning approaches often suffer from task interference, catastrophic forgetting, or a lack of reversibility. In this paper, we propose Orthogonally Constrained Modular Delta Merging (MDM-OC), a novel framework that enables scalable, interference-free, and reversible composition of fine-tuned models. Each task-specific model is encoded as a delta from a shared base model and projected into an orthogonal subspace to eliminate conflicts between tasks. The projected deltas are then merged via gradient-based optimization into a unified model that maintains performance across all tasks. The approach supports continuous integration of new models, structured separation of individual task contributions for compliance with regulations such as the GDPR, and stability of the merged model through resilient weight merging and synthetic regeneration. Extensive experiments on vision and natural language processing benchmarks show that MDM-OC outperforms prior baselines in accuracy, transferability, and separation fidelity while remaining memory-efficient and computationally tractable. The framework thus provides a principled foundation for building modular and compliant AI systems.
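To make the delta-encoding and orthogonal-projection steps concrete, the following is a minimal NumPy sketch. It treats each model as a flattened parameter vector, orthogonalizes each new delta against previously merged ones with a Gram-Schmidt-style projection, and recombines them onto the shared base. The function names, the projection scheme, and the uniform merge weights are illustrative assumptions; the full method described above instead learns the merge coefficients via gradient-based optimization.

```python
# Illustrative sketch only: assumes models are flattened parameter vectors and
# uses uniform merge weights in place of the gradient-optimized coefficients.
import numpy as np

def encode_delta(task_params: np.ndarray, base_params: np.ndarray) -> np.ndarray:
    """Represent a fine-tuned model as a delta from the shared base."""
    return task_params - base_params

def project_orthogonal(delta: np.ndarray, basis: list) -> np.ndarray:
    """Remove components of `delta` that overlap with previously merged deltas."""
    for b in basis:
        denom = np.dot(b, b)
        if denom > 1e-12:
            delta = delta - (np.dot(delta, b) / denom) * b
    return delta

def merge(base_params: np.ndarray, deltas: list, weights=None) -> np.ndarray:
    """Add orthogonalized deltas back onto the base model."""
    if weights is None:
        weights = [1.0] * len(deltas)  # placeholder for learned merge coefficients
    merged = base_params.copy()
    orthogonal_basis = []
    for w, d in zip(weights, deltas):
        d_perp = project_orthogonal(d, orthogonal_basis)
        orthogonal_basis.append(d_perp)
        merged += w * d_perp
    return merged

# Toy usage: three task-specific models derived from one shared base.
rng = np.random.default_rng(0)
base = rng.normal(size=1000)
task_models = [base + 0.01 * rng.normal(size=1000) for _ in range(3)]
deltas = [encode_delta(m, base) for m in task_models]
unified = merge(base, deltas)

# Reversibility in this sketch: rebuild the merged model without one task's delta.
without_task_1 = merge(base, [deltas[0], deltas[2]])
```

In this toy setting, removing a task amounts to re-merging without its delta, which mirrors the reversible, compliance-oriented behavior the abstract describes, though the actual removal and regeneration procedure in the paper may differ.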