Daily Arxiv

This page collects and organizes papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; when sharing, please cite the source.

Barycentric Neural Networks and Length-Weighted Persistent Entropy Loss: A Green Geometric and Topological Framework for Function Approximation

Created by
  • Haebom

Authors

Victor Toscano-Duran, Rocio Gonzalez-Diaz, Miguel A. Gutiérrez-Naranjo

Outline

This paper builds on the ability of artificial neural networks to approximate continuous functions, targeting the main drawbacks of existing approaches: excessive parameter counts and high computational cost. It introduces the Barycentric Neural Network (BNN), a compact, shallow architecture whose structure and parameters are encoded by a set of fixed anchor points and their corresponding barycentric coordinates. A BNN exactly represents continuous piecewise linear functions (CPLFs), and since CPLFs can uniformly approximate any continuous function on a compact domain, the BNN serves as a flexible and interpretable function approximator. To improve geometric fidelity in resource-limited settings with few anchor points or limited training epochs, the paper further proposes Length-Weighted Persistent Entropy (LWPE), a robust variant of persistent entropy. Optimizing the anchor points with an LWPE-based loss, rather than training internal network parameters, yields better approximations and faster convergence than standard loss functions (MSE, RMSE, MAE, and LogCosh), making the framework a computationally efficient ("green") alternative for function approximation.
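For intuition about the barycentric encoding, here is a minimal sketch of how a one-dimensional CPLF can be evaluated from fixed anchor points via barycentric (convex) coordinates on each segment. The names (`bnn_eval`, `anchor_x`, `anchor_y`) are illustrative, not from the paper; the paper's BNN realizes this computation as a compact shallow network rather than an explicit interpolation routine:

```python
import numpy as np

def bnn_eval(x, anchor_x, anchor_y):
    """Evaluate the CPLF fixed by anchor points (anchor_x, anchor_y).

    On the segment [x_i, x_{i+1}] containing x, the output is the
    barycentric (convex) combination lam * y_i + (1 - lam) * y_{i+1},
    with lam = (x_{i+1} - x) / (x_{i+1} - x_i).
    """
    x = np.asarray(x, dtype=float)
    anchor_x = np.asarray(anchor_x, dtype=float)
    anchor_y = np.asarray(anchor_y, dtype=float)
    # Index of the segment containing each query point.
    i = np.clip(np.searchsorted(anchor_x, x) - 1, 0, len(anchor_x) - 2)
    lam = (anchor_x[i + 1] - x) / (anchor_x[i + 1] - anchor_x[i])
    return lam * anchor_y[i] + (1.0 - lam) * anchor_y[i + 1]

# Approximating sin on [0, 2*pi] with 6 anchor points:
xs = np.linspace(0.0, 2.0 * np.pi, 200)
ax = np.linspace(0.0, 2.0 * np.pi, 6)
err = np.max(np.abs(bnn_eval(xs, ax, np.sin(ax)) - np.sin(xs)))
print(f"max approximation error: {err:.3f}")
```

Since the anchor points fully determine both where the linear pieces break and which values they take, they are natural optimization variables, which is what the LWPE-based loss exploits.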
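The LWPE loss builds on persistent entropy: the Shannon entropy of a persistence barcode's normalized bar lengths. As a toy stand-in (the paper's LWPE is a length-weighted, more robust variant whose exact definition follows the paper, not this sketch), the following computes 0-dimensional sublevel-set persistence of a sampled 1D function with a union-find sweep and compares the entropies of a prediction and a target:

```python
import numpy as np

def sublevel_lifetimes(y):
    """Finite bar lengths of 0-dim sublevel-set persistence of samples y.

    Components are born at local minima and die when they merge into an
    older (lower-birth) component; the global-minimum component never
    dies and is omitted here.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    parent = np.arange(n)          # union-find over activated samples
    birth = np.full(n, np.inf)     # birth value of each component root
    active = np.zeros(n, dtype=bool)
    lifetimes = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in np.argsort(y):        # sweep the filtration value upward
        active[i] = True
        birth[i] = y[i]
        for j in (i - 1, i + 1):   # try to merge with activated neighbors
            if 0 <= j < n and active[j]:
                ri, rj = find(i), find(j)
                if ri != rj:
                    if birth[ri] > birth[rj]:
                        ri, rj = rj, ri            # rj is the younger one
                    lifetimes.append(y[i] - birth[rj])  # younger bar dies
                    parent[rj] = ri
    return np.array([l for l in lifetimes if l > 0])

def persistent_entropy(lifetimes):
    """Shannon entropy of bar lengths normalized to a distribution
    (assumes at least one finite bar)."""
    p = lifetimes / lifetimes.sum()
    return float(-(p * np.log(p)).sum())

# Toy topological loss: match the persistent entropy of the target.
rng = np.random.default_rng(0)
target = np.sin(np.linspace(0.0, 4.0 * np.pi, 100))
pred = target + 0.05 * rng.standard_normal(target.shape)  # stand-in output
loss = (persistent_entropy(sublevel_lifetimes(pred))
        - persistent_entropy(sublevel_lifetimes(target))) ** 2
print(f"entropy-matching loss: {loss:.4f}")
```

Noise creates short spurious bars that shift the entropy, so a loss of this family can penalize geometric and topological distortions even when pointwise errors are small.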

Takeaways, Limitations

Takeaways:
Computationally efficient alternative for function approximation: by avoiding the excessive parameter counts and computational cost of conventional approaches, the framework suits low-resource environments.
Interpretability: the anchor-point encoding makes the structure and behavior of a BNN intuitive to understand.
Improved geometric fidelity: the LWPE-based loss preserves the shape of the target function even with few anchor points or limited training epochs.
Superior performance: approximation quality improves over standard loss functions (MSE, RMSE, MAE, and LogCosh).
Limitations:
Shallow architecture: a single shallow network may be limited in expressing highly complex functions.
CPLF approximation limits: although any continuous function can be uniformly approximated by CPLFs, the accuracy achieved for a particular function depends on the chosen CPLF representation (e.g., the number and placement of anchor points).
Complexity of anchor-point optimization: optimizing the anchor points can itself be nontrivial, and finding an optimal configuration may be difficult.