This paper addresses the limitations of gradient-based decision-focused learning (DFL) for optimization problems such as linear programs (LPs). Existing gradient-based DFL methods follow one of two strategies: smoothing the LP or minimizing a surrogate loss. However, the authors demonstrate that the former strategy still results in zero gradients. They therefore propose minimizing the surrogate loss even when using differentiable optimization layers. Experimental results show that differentiable optimization layers trained with a surrogate loss achieve comparable or lower regret than existing surrogate-loss-based DFL methods. In particular, the authors demonstrate that DYS-Net, a recently proposed differentiable optimization technique for LPs, can significantly reduce training time while achieving state-of-the-art regret when trained by minimizing the surrogate loss.
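
To make the contrast concrete, the following is a minimal, self-contained sketch (a toy illustration of the idea, not the authors' code and not the DYS-Net API). It uses an LP over the unit simplex, where a softmax acts as a smoothed differentiable stand-in for the LP layer, and trains a linear cost predictor by minimizing an SPO+-style surrogate loss through that layer instead of the regret-style decision loss.

```python
import torch

def exact_lp_simplex(c):
    # Exact solution of min_x c.x over the unit simplex: a one-hot vertex.
    return torch.nn.functional.one_hot(c.argmin(dim=-1), c.shape[-1]).float()

def smoothed_lp_simplex(c, tau=0.1):
    # Differentiable stand-in for an approximate LP layer (a smoothed solver
    # in the spirit of DYS-Net): softmax relaxation of the simplex LP.
    return torch.softmax(-c / tau, dim=-1)

def decision_loss(c_pred, c_true):
    # "Smoothing" route: push the regret-style decision loss through the
    # smoothed layer; as tau -> 0 this gradient collapses toward zero,
    # mirroring the zero-gradient issue described above.
    return (c_true * smoothed_lp_simplex(c_pred)).sum(dim=-1).mean()

def spo_plus_loss(c_pred, c_true):
    # SPO+-style surrogate loss, with the inner minimization replaced by the
    # differentiable layer so gradients also flow through the solver.
    x_star = exact_lp_simplex(c_true)                   # oracle solution for true costs
    x_inner = smoothed_lp_simplex(2 * c_pred - c_true)  # relaxed inner argmin
    return (-(2 * c_pred - c_true) * x_inner
            + 2 * c_pred * x_star - c_true * x_star).sum(dim=-1).mean()

# Toy training loop: a linear model maps features to predicted LP costs and is
# trained by minimizing the surrogate loss through the differentiable layer.
torch.manual_seed(0)
model = torch.nn.Linear(4, 3)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
feats, c_true = torch.randn(32, 4), torch.rand(32, 3)
for _ in range(200):
    opt.zero_grad()
    spo_plus_loss(model(feats), c_true).backward()
    opt.step()
```

The decision_loss function is included only for contrast with the smoothing route; the training loop itself follows the surrogate-loss route summarized above.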