
Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Conformal Risk Control

Created by
  • Haebom

Author

Anastasios N. Angelopoulos, Stephen Bates, Adam Fisch, Lihua Lei, Tal Schuster

Outline

This paper extends conformal prediction to control the expectation of an arbitrary monotone loss function. The algorithm generalizes split conformal prediction together with its coverage guarantee, and, like conformal prediction, the conformal risk control procedure is tight up to an $\mathcal{O}(1/n)$ factor. The paper also introduces extensions to distribution shift, quantile risk control, control of multiple and adversarial risks, and risks defined as expectations of U-statistics. The algorithm is demonstrated on real-world examples from computer vision and natural language processing, bounding the false negative rate, graph distance, and token-level F1 score.
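The core procedure can be sketched as follows. This is an illustrative sketch, not the authors' code: the grid of thresholds, the variable names, and the toy multi-label data are all assumptions. Given calibration losses that are non-increasing in a threshold $\lambda$ and bounded above by $B$, the procedure picks the smallest $\lambda$ whose finite-sample-adjusted empirical risk is at most the target level $\alpha$:

```python
import numpy as np

def crc_threshold(cal_losses, lambdas, alpha, B=1.0):
    """Smallest lambda on the grid whose adjusted empirical risk is <= alpha.

    cal_losses[i, j] is the loss of calibration sample i at lambdas[j];
    each row must be non-increasing in lambda and bounded above by B.
    """
    n = cal_losses.shape[0]
    rhat = cal_losses.mean(axis=0)            # empirical risk per lambda
    adjusted = (n * rhat + B) / (n + 1)       # finite-sample correction
    ok = np.flatnonzero(adjusted <= alpha)
    if ok.size == 0:
        raise ValueError("no lambda on the grid meets the risk level alpha")
    return lambdas[ok[0]]                     # infimum over the grid

# Toy example (hypothetical data): multi-label false negative rate.
# A label is "caught" when its score exceeds 1 - lambda, so the FNR
# loss is non-increasing in lambda and bounded by B = 1.
rng = np.random.default_rng(0)
n, k = 500, 10
scores = rng.uniform(size=(n, k))             # hypothetical class scores
labels = rng.uniform(size=(n, k)) < 0.3       # hypothetical true labels
lambdas = np.linspace(0.0, 1.0, 101)

caught = scores[:, :, None] >= 1 - lambdas[None, None, :]
missed = labels[:, :, None] & ~caught
# losses[i, j]: fraction of sample i's true labels missed at lambdas[j]
losses = missed.sum(axis=1) / np.maximum(labels.sum(axis=1), 1)[:, None]

lam_hat = crc_threshold(losses, lambdas, alpha=0.1)
```

Under exchangeability of calibration and test data, prediction sets built with `lam_hat` satisfy $\mathbb{E}[\ell] \le \alpha$ on a fresh test point; with $\ell$ chosen as the miscoverage indicator, this recovers ordinary split conformal prediction.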

Takeaways, Limitations

Takeaways:
It presents a generalization of conformal prediction that controls the expectation of an arbitrary monotone loss function.
It applies to a much wider range of loss functions while retaining the coverage guarantee of split conformal prediction.
It extends to a variety of settings, including distribution shift, quantile risk control, and control of multiple and adversarial risks.
It demonstrates practical applicability in computer vision and natural language processing.
Limitations:
The guarantee is tight only up to an $\mathcal{O}(1/n)$ factor, so it may be loose for small calibration sets.
The algorithm's performance depends on assumptions about the data, notably exchangeability of calibration and test points.
The various extensions increase the complexity of the theoretical analysis.