Daily Arxiv

This page organizes papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
The copyright of each paper belongs to its authors and their institutions; when sharing, please cite the source.

An Information-Flow Perspective on Explainability Requirements: Specification and Verification

Created by
  • Haebom

Author

Bernd Finkbeiner, Hadar Frenkel, Julian Siber

Outline

This paper starts from the observation that explainable systems provide interacting agents with information about why certain observed effects occur. While this is a positive information flow, the authors argue that it must be balanced against negative information flows, such as privacy violations. Since both explainability and privacy require reasoning about knowledge, they address this issue using an epistemic temporal logic that incorporates quantification over counterfactual causes. This makes it possible to specify that a multi-agent system provides agents with sufficient information to acquire knowledge of why certain effects occurred. The paper uses this principle to specify explainability as a system-level requirement and presents an algorithm for verifying finite-state models against such specifications. A prototype implementation is evaluated on several benchmarks, demonstrating how the approach can distinguish explainable from inexplicable systems and additionally establish privacy requirements.
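The core idea can be illustrated with a toy sketch (all names and the state encoding below are hypothetical, not the paper's actual formalism): in a finite-state model, an agent "knows" a fact if that fact holds in every state the agent cannot distinguish from its observation, and a system is explainable in this toy sense if, wherever the effect occurs, the observing agent knows the cause.

```python
# Toy finite-state model: each state is (agent's observation, set of true propositions).
# All names here are illustrative, not from the paper.
states = [
    ("obs1", {"cause", "effect"}),
    ("obs2", {"effect"}),            # effect occurs, but the cause is hidden
    ("obs2", {"cause", "effect"}),
]

def knows(agent_obs, prop):
    """The agent knows `prop` under observation `agent_obs` iff `prop`
    holds in every state it cannot distinguish (same observation)."""
    return all(prop in props for obs, props in states if obs == agent_obs)

# Explainability-style check: in every state where the effect holds,
# does the agent know the cause?
explainable = all(
    knows(obs, "cause")
    for obs, props in states
    if "effect" in props
)
print(explainable)  # False: under "obs2" the agent cannot rule out the cause-free state
```

The paper's actual specification language is far richer (temporal operators plus quantification over counterfactual causes), but this indistinguishability-based knowledge check is the epistemic core that the verification algorithm builds on.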

Takeaways, Limitations

Takeaways:
  • Presents a new framework that considers explainability and privacy simultaneously.
  • Enables formal, quantitative analysis using epistemic temporal logic and counterfactual causation.
  • Provides an algorithm for specifying and verifying explainability as a system-level requirement.
  • Demonstrates practicality through a prototype implementation and benchmark evaluation.
Limitations:
  • The computational complexity of the algorithm is not analyzed.
  • Applicability and scalability to real-world systems require further research.
  • The various types of explainability and privacy threats are not comprehensively covered.
  • The scope of systems to which the proposed algorithm applies is not clearly stated.