Daily Arxiv

This page organizes papers related to artificial intelligence published around the world.
This page is summarized using Google Gemini and is operated on a non-profit basis.
The copyright of the paper belongs to the author and the relevant institution. When sharing, simply cite the source.

Neural Logic Networks for Interpretable Classification

Created by
  • Haebom

Author

Vincent Perreault, Katsumi Inoue, Richard Labib, Alain Hertz

Outline

Traditional neural networks achieve excellent classification performance, but what they have learned is difficult to verify or extract. This paper proposes Neural Logic Networks, an interpretable architecture that learns the logical mechanism linking inputs to outputs through AND and OR operations. The authors generalize the network by adding a NOT operation and a bias term that accounts for unobserved data, and they develop rigorous logical and probabilistic modeling in terms of concept combinations to make the networks practical to use. They further propose a novel factorized IF-THEN rule structure and a modified learning algorithm. The proposed method improves on the state of the art in Boolean network discovery and can learn relevant, interpretable rules, particularly for tabular classification in medical and industrial settings where interpretability is crucial.
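The AND/OR/NOT mechanism described above can be illustrated with differentiable "soft" logic gates. The sketch below is a minimal illustration using product t-norm semantics with learnable membership weights; the function names and weighting scheme are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def soft_not(x):
    # Soft NOT: probabilistic complement of a truth value in [0, 1].
    return 1.0 - x

def soft_and(x, w):
    # Soft AND over inputs x, gated by membership weights w in [0, 1].
    # A weight near 0 excludes that input from the conjunction;
    # a weight near 1 includes it. With crisp inputs and w = 1,
    # this reduces to ordinary Boolean AND.
    return np.prod(1.0 - w * (1.0 - x))

def soft_or(x, w):
    # Soft OR via De Morgan's law: OR(x) = NOT(AND(NOT(x))).
    return 1.0 - np.prod(1.0 - w * x)

# Example: the rule "x0 AND (NOT x1)" evaluated on crisp inputs.
x = np.array([1.0, 0.0])
w = np.array([1.0, 1.0])          # both literals participate in the rule
literals = np.array([x[0], soft_not(x[1])])
print(soft_and(literals, w))      # → 1.0 (the rule fires)
```

Because the gates are smooth in both the inputs and the weights, the membership weights can in principle be trained by gradient descent and then thresholded to read off a discrete logical rule, which is the general idea behind making such networks interpretable.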

Takeaways, Limitations

Takeaways:
  • Interpretable neural network structure: AND, OR, and NOT operations make the learned logic inspectable.
  • Improved Boolean network discovery: outperforms existing methods.
  • Applicability to medical and industrial fields: can learn useful rules in domains where interpretability matters.
  • A novel factorized IF-THEN rule structure and a modified learning algorithm.
Limitations:
The abstract does not mention any specific limitations.