This paper leverages the ability of artificial neural networks to approximate continuous functions, aiming to address a limitation of existing approaches: their reliance on excessive parameters and high computational cost. To this end, we present the Barycentric Neural Network (BNN), a compact, shallow architecture that encodes its structure and parameters through a set of fixed anchor points and their corresponding barycentric coordinates. BNNs accurately represent continuous piecewise linear functions (CPLFs); since CPLFs can uniformly approximate any continuous function on a compact domain, the BNN provides a flexible and interpretable tool for function approximation. Furthermore, to improve geometric fidelity in resource-limited settings, i.e., with few anchor points or limited training epochs, we propose Length-Weighted Persistent Entropy (LWPE), a robust variant of persistent entropy. By optimizing the anchor points rather than the BNN's internal parameters with an LWPE-based loss function, we achieve better and faster approximation than standard loss functions (MSE, RMSE, MAE, and LogCosh), offering a computationally efficient approach to BNN-based function approximation.
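As an informal illustration of the principle the abstract relies on (not the authors' BNN architecture or LWPE loss), the sketch below evaluates a CPLF defined by a set of anchor points via barycentric interpolation between neighbouring anchors, and shows that the uniform error on a compact interval shrinks as anchors are added; the function names and the sine target are illustrative assumptions.

```python
import math

def cplf(anchors, x):
    """Evaluate the continuous piecewise linear function defined by
    anchor points (x_i, y_i), sorted by x_i, at a query point x."""
    xs, ys = zip(*anchors)
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            # barycentric weight of x relative to its two neighbouring anchors
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return (1 - t) * ys[i] + t * ys[i + 1]

def max_error(f, anchors, n=1000):
    """Approximate uniform (sup-norm) error of the CPLF against f on a grid."""
    a, b = anchors[0][0], anchors[-1][0]
    pts = [a + (b - a) * k / n for k in range(n + 1)]
    return max(abs(f(x) - cplf(anchors, x)) for x in pts)

# More anchors -> uniformly better approximation of a continuous target.
for m in (5, 10, 20):
    anchors = [(math.pi * i / (m - 1), math.sin(math.pi * i / (m - 1)))
               for i in range(m)]
    print(m, "anchors, sup error ~", round(max_error(math.sin, anchors), 4))
```

For equally spaced anchors the sup-norm error of linear interpolation decays on the order of the squared anchor spacing, which is why even a small anchor budget can be effective when the anchors are placed well, the regime the LWPE-based loss targets.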