This paper proposes Low-Confidence Gold (LCG), a novel filtering framework for improving the efficiency of instruction fine-tuning in large language models. LCG identifies valuable instruction pairs through centroid-based clustering and confidence-based selection; semi-supervised training of a lightweight classifier then produces a high-quality subset while preserving data diversity. Experimental results show that a model fine-tuned on 6K samples filtered by LCG outperforms existing methods, achieving significant gains on MT-bench and consistent improvements across comprehensive evaluation metrics. The framework's effectiveness in improving efficiency while maintaining model performance suggests a promising direction for efficient instruction fine-tuning.
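The pipeline described above (clustering, a lightweight classifier, confidence-based selection) can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the embeddings are random stand-ins for instruction embeddings, scikit-learn's KMeans and LogisticRegression stand in for whatever clustering method and lightweight classifier the paper uses, and the selection budget `k` is arbitrary.

```python
# Hypothetical sketch of LCG-style filtering (illustrative, not the paper's code):
# cluster instruction embeddings, fit a lightweight classifier on the cluster
# labels, then keep the samples the classifier is least confident about.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 32))  # stand-in for instruction embeddings

# Step 1: centroid-based clustering to capture data diversity.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
labels = kmeans.fit_predict(embeddings)

# Step 2: lightweight classifier trained to predict cluster membership.
clf = LogisticRegression(max_iter=1000).fit(embeddings, labels)

# Step 3: confidence-based selection -- samples with low maximum predicted
# probability are treated as the informative "gold" candidates.
confidence = clf.predict_proba(embeddings).max(axis=1)
k = 100  # illustrative selection budget
selected = np.argsort(confidence)[:k]  # indices of least-confident samples
```

In this sketch the clusters serve only as pseudo-labels: the classifier's uncertainty about cluster membership is used as a proxy for how informative a sample is, and sampling low-confidence points tends to draw from cluster boundaries rather than redundant cluster cores.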