This paper addresses a key limitation of gradient-based decision-focused learning (DFL) for optimization problems such as linear programs (LPs): because the optimal solution of an LP is piecewise constant in its cost parameters, the gradients needed for end-to-end training are zero almost everywhere. Existing gradient-based DFL approaches attempt to address this issue in one of two ways: (a) smoothing the LP by adding quadratic regularization, thereby making the mapping from costs to solutions differentiable, or (b) minimizing a surrogate loss that provides information-rich (sub)gradients. However, this paper shows that approach (a) still suffers from vanishing gradients even after smoothing. It therefore proposes to minimize a surrogate loss even when training with differentiable optimization layers. Experimental results demonstrate that, when trained by surrogate loss minimization, differentiable optimization layers achieve regret comparable to or better than existing surrogate-loss-based DFL methods. Specifically, we demonstrate that minimizing the surrogate loss using DYS-Net can achieve state-of-the-art regret while significantly reducing training time.
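To make the idea concrete, the following is a minimal, self-contained PyTorch sketch of the general recipe described above: embed a quadratically regularized LP as a differentiable optimization layer and train the predictor by minimizing a surrogate loss evaluated through that layer, rather than the regret itself. The feasible region (the unit simplex), the closed-form regularized solver (a simplex projection), the SPO+-style surrogate, and the toy data are illustrative assumptions chosen for exposition; they are not the paper's exact formulation and do not implement DYS-Net.

```python
import torch
import torch.nn as nn


def project_simplex(v: torch.Tensor) -> torch.Tensor:
    """Euclidean projection of each row of v onto the probability simplex."""
    n = v.shape[-1]
    u, _ = torch.sort(v, dim=-1, descending=True)
    cssv = u.cumsum(dim=-1) - 1.0
    ks = torch.arange(1, n + 1, device=v.device, dtype=v.dtype)
    rho = (u * ks > cssv).sum(dim=-1, keepdim=True)       # support size of the projection
    theta = cssv.gather(-1, rho - 1) / rho.to(v.dtype)    # shift that renormalizes the support
    return torch.clamp(v - theta, min=0.0)


def smoothed_lp_layer(cost: torch.Tensor, gamma: float = 1.0) -> torch.Tensor:
    """Quadratically regularized LP over the simplex:
    argmin_x  cost^T x + (gamma/2) ||x||^2  s.t.  x >= 0, sum(x) = 1.
    The solution is a simplex projection, so autograd yields
    (almost-everywhere) well-defined Jacobians."""
    return project_simplex(-cost / gamma)


def spo_plus_surrogate(pred_cost, true_cost, gamma=1.0):
    """SPO+-style surrogate loss whose inner maximization is evaluated through
    the smoothed, differentiable LP layer rather than the exact LP oracle."""
    n = true_cost.shape[-1]
    # Exact LP solution under the true cost (constant w.r.t. the model).
    x_star = nn.functional.one_hot(true_cost.argmin(-1), n).to(true_cost.dtype)
    # Smoothed maximizer of (true_cost - 2 * pred_cost)^T x over the simplex.
    x_inner = smoothed_lp_layer(2.0 * pred_cost - true_cost, gamma)
    loss = ((true_cost - 2.0 * pred_cost) * x_inner).sum(-1) \
        + 2.0 * (pred_cost * x_star).sum(-1) - (true_cost * x_star).sum(-1)
    return loss.mean()


def regret(pred_cost, true_cost):
    """Decision regret: true cost of the decision induced by the predicted
    costs, minus the true optimal cost."""
    n = true_cost.shape[-1]
    x_hat = nn.functional.one_hot(pred_cost.argmin(-1), n).to(true_cost.dtype)
    x_opt = nn.functional.one_hot(true_cost.argmin(-1), n).to(true_cost.dtype)
    return (true_cost * (x_hat - x_opt)).sum(-1).mean()


# Toy experiment: features -> linear predictor -> costs; train by minimizing
# the surrogate loss through the smoothed layer, then report decision regret.
torch.manual_seed(0)
num_items, num_feats, num_samples = 10, 5, 512
W_true = torch.randn(num_feats, num_items)
feats = torch.randn(num_samples, num_feats)
costs = feats @ W_true + 0.1 * torch.randn(num_samples, num_items)

model = nn.Linear(num_feats, num_items)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(200):
    optimizer.zero_grad()
    loss = spo_plus_surrogate(model(feats), costs, gamma=1.0)
    loss.backward()
    optimizer.step()

print(f"surrogate loss: {loss.item():.4f}  regret: {regret(model(feats), costs).item():.4f}")
```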