This paper presents ReasonBridge, a novel methodology that narrows the performance gap between closed-source and open-source large language models (LLMs) on tasks requiring complex reasoning and precise instruction following. ReasonBridge efficiently transfers the reasoning capabilities of powerful closed-source models to open-source models through a hierarchical knowledge distillation framework. It builds on Reason1K, a curated dataset of 1,000 reasoning traces drawn from diverse domains and filtered by a multi-criteria selection algorithm. The methodology integrates three components: a hierarchical distillation process, a sparse reasoning-focused adapter architecture that adds only 0.3% additional trainable parameters, and a test-time compute scaling mechanism based on guided reasoning intervention. Experimental results show that ReasonBridge improves the reasoning capabilities of open-source models by up to 23% on benchmark tasks, substantially reducing the gap with closed-source models. In particular, the enhanced Qwen2.5-14B outperforms Claude 3.5 Sonnet on MATH500 and matches it on AIME problems. The methodology generalizes effectively across a variety of reasoning domains and model architectures, offering a sample-efficient approach to improving reasoning and instruction following.