Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

BranchNet: A Neuro-Symbolic Learning Framework for Structured Multi-Class Classification

Created by
  • Haebom

Author

Dalia Rodríguez-Salas, Christian Riess

Outline

BranchNet is a neuro-symbolic learning framework that transforms an ensemble of decision trees into a sparse, partially connected neural network. Each branch, defined as the decision path from the root to a leaf's parent node, is mapped to one hidden neuron, which preserves the symbolic structure while enabling gradient-based optimization. The resulting models are compact and interpretable and require no manual architecture tuning. On a variety of structured multi-class classification benchmarks, BranchNet consistently outperforms XGBoost in accuracy, with statistically significant gains. The paper details the architecture, training procedure, and sparsity dynamics, discusses the model's strengths in symbolic interpretability, and notes a current limitation: binary tasks may benefit from additional adaptive calibration.
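The branch-to-neuron mapping can be illustrated with a minimal sketch. The toy tree, branch list, and masking scheme below are illustrative assumptions, not the paper's actual implementation: each hand-written branch records the feature indices tested along a root-to-(leaf-parent) path, and a binary mask restricts each hidden neuron's incoming weights to exactly those features, so gradient updates never break the sparse, tree-derived connectivity.

```python
import numpy as np

# Hypothetical sketch of BranchNet's core idea: one hidden neuron per branch,
# connected only to the features its decision path actually tests.

n_features = 4

# Toy branches (feature indices tested on each root-to-leaf-parent path).
branches = [
    [0, 2],  # branch 1 tests x0 then x2
    [0, 3],  # branch 2 tests x0 then x3
    [1],     # branch 3 tests x1 only
]

# Binary connectivity mask: row j is the feature set of branch j.
mask = np.zeros((len(branches), n_features))
for j, feats in enumerate(branches):
    mask[j, feats] = 1.0

# Trainable weights are multiplied by the mask on every forward pass,
# so masked-out connections stay at zero throughout training.
rng = np.random.default_rng(0)
W = rng.normal(size=mask.shape)

def hidden_activations(x):
    # Sparse branch layer; tanh acts as a soft surrogate for hard splits.
    return np.tanh((W * mask) @ x)

x = np.array([1.0, -0.5, 0.2, 0.8])
h = hidden_activations(x)
print(h.shape)  # one activation per branch
```

A downstream dense layer (not shown) would map these branch activations to class scores; because each neuron corresponds to a symbolic path, its activation remains directly interpretable.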

Takeaways, Limitations

Takeaways:
Presents a new model that combines the advantages of decision tree ensembles (interpretability) and neural networks (gradient-based optimization).
Achieves higher accuracy than XGBoost on structured multi-class benchmarks.
Produces compact, interpretable models without manual architecture tuning.
Limitations:
Performance may lag on binary classification tasks.
Additional adaptive calibration may be needed to close this gap on binary tasks.