BranchNet is a neuro-symbolic learning framework that transforms an ensemble of decision trees into a sparse, partially connected neural network. Each branch, defined as a decision path from the root to the parent node of a leaf, is mapped to a hidden neuron, enabling gradient-based optimization while preserving the symbolic structure. The resulting model is compact and interpretable, and requires no manual architecture tuning. When evaluated on a variety of structured multi-class classification benchmarks, BranchNet consistently outperforms XGBoost in accuracy, with statistically significant gains. In this paper, we detail the architecture, training procedure, and sparsity dynamics, and discuss the model's strengths in symbolic interpretability as well as its current limitations on binary tasks, where additional adaptive calibration could be beneficial.
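The branch-to-neuron mapping can be sketched as follows. This is a minimal illustration under assumed details (scikit-learn trees, a boolean feature mask per branch), not the authors' implementation: each root-to-(parent-of-leaf) path in a fitted tree selects the features it tests along the way, and that selection becomes one row of a sparse connectivity mask for the hidden layer.

```python
# Hypothetical sketch of extracting "branches" (root -> parent of a leaf)
# from a fitted decision tree and turning each into a sparse feature mask.
# Assumes scikit-learn's tree internals; not the paper's actual code.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

def branch_feature_masks(tree, n_features):
    """One boolean mask per branch: True where the branch tests a feature."""
    t = tree.tree_
    masks = []

    def walk(node, used):
        if t.children_left[node] == -1:  # leaf node: nothing to record
            return
        used = used | {int(t.feature[node])}
        left, right = t.children_left[node], t.children_right[node]
        if t.children_left[left] == -1 or t.children_left[right] == -1:
            # This node is the parent of at least one leaf, so the path
            # from the root down to it defines one branch / hidden neuron.
            m = np.zeros(n_features, dtype=bool)
            m[sorted(used)] = True
            masks.append(m)
        walk(left, used)
        walk(right, used)

    walk(0, set())
    return np.array(masks)

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
M = branch_feature_masks(clf, X.shape[1])
# M has shape (n_branches, n_features); a hidden layer with n_branches
# neurons would use M to zero out weights outside each branch's features,
# yielding the sparse, partially connected network described above.
```

Repeating this over every tree in the ensemble and stacking the masks gives the full hidden layer; the masked weights can then be refined by gradient descent while the zeroed connections keep the symbolic structure intact.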