Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized by Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Minimizing Surrogate Losses for Decision-Focused Learning using Differentiable Optimization

Created by
  • Haebom

Author

Jayanta Mandi, Ali İrfan Mahmutoğulları, Senne Berden, Tias Guns

Outline

This paper addresses the limitations of gradient-based decision-focused learning (DFL) for optimization problems such as linear programs (LPs). Existing gradient-based DFL approaches follow one of two strategies: smoothing the LP so that it can be differentiated, or minimizing a surrogate loss instead of the regret. The authors show that the former still yields zero gradients, and therefore propose minimizing surrogate losses even when using differentiable optimization layers. Experiments demonstrate that differentiable optimization layers trained by surrogate loss minimization achieve regret comparable to or better than existing surrogate-loss-based DFL methods. In particular, they show that DYS-Net, a recently proposed differentiable optimization technique for LPs, achieves state-of-the-art regret while significantly reducing training time when trained with a surrogate loss.
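To make the recipe concrete, here is a minimal sketch of the general idea, not the paper's DYS-Net implementation: a differentiable optimization layer (built with cvxpylayers over a quadratically smoothed toy LP) is trained by minimizing the SPO+ surrogate loss rather than the regret. The unit-box LP, the smoothing weight, the network, and the synthetic data are all illustrative assumptions.

```python
import torch
import cvxpy as cp
from cvxpylayers.torch import CvxpyLayer

# Toy LP over the unit box: min_x c^T x  s.t. 0 <= x <= 1 (illustrative choice).
# A small quadratic term smooths the LP so the argmin is unique and differentiable.
n, d, mu = 10, 5, 0.1
x = cp.Variable(n)
c = cp.Parameter(n)
problem = cp.Problem(cp.Minimize(c @ x + mu * cp.sum_squares(x)),
                     [x >= 0, x <= 1])
layer = CvxpyLayer(problem, parameters=[c], variables=[x])

model = torch.nn.Linear(d, n)          # predicts cost vectors from features
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

feats = torch.randn(32, d)             # synthetic features (illustrative)
c_true = torch.randn(32, n)            # synthetic true cost vectors
x_true, = layer(c_true)                # decisions under the true costs
x_true = x_true.detach()

def spo_plus(c_hat, c_true, x_true):
    # SPO+ surrogate: max_x (c - 2*c_hat)^T x + 2*c_hat^T x*(c) - c^T x*(c).
    # The inner maximization is computed through the differentiable (smoothed)
    # layer, so gradients flow through the optimization instead of vanishing.
    x_inner, = layer(2 * c_hat - c_true)
    return (((c_true - 2 * c_hat) * x_inner).sum(1)
            + 2 * (c_hat * x_true).sum(1)
            - (c_true * x_true).sum(1)).mean()

for _ in range(100):
    loss = spo_plus(model(feats), c_true, x_true)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Minimizing a surrogate such as SPO+ rather than the regret of the smoothed solution is what keeps the gradients informative; in the paper's faster variant, the cvxpylayers layer would be replaced by DYS-Net.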

Takeaways, Limitations

Takeaways:
  • Clarifies the limitations of gradient-based DFL for LPs.
  • Shows that surrogate loss minimization is effective even when using differentiable optimization layers.
  • Demonstrates that DYS-Net can achieve state-of-the-art regret while reducing training time.
Limitations:
  • The proposed method is limited to LPs; further research is needed to determine whether it generalizes to other types of optimization problems.
  • The effectiveness of DYS-Net may depend on the specific problem and implementation; further comparative analysis with other differentiable optimization techniques is needed.