To overcome the limitations of conventional artificial neural networks, which often rely on computationally expensive deep or overparameterized architectures, this paper proposes a new type of small-scale shallow neural network, the Barycentric Neural Network (BNN). A BNN defines its structure and parameters through a fixed set of basis points and their barycentric coordinates, which allows it to represent continuous piecewise linear functions (CPLFs) exactly while guaranteeing continuity across segments by construction. Since any continuous function on a compact domain can be approximated arbitrarily well by CPLFs, BNNs provide a flexible and interpretable tool for function approximation. In addition, we introduce the Length-Weighted Persistent Entropy (LWPE), a novel variant of persistent entropy that is geometrically interpretable, stable, and scale-invariant, and that weights each topological feature by its lifetime. Combining the BNN with an LWPE-based loss function, our framework targets accurate, geometrically interpretable approximation of nonlinear continuous functions in resource-constrained settings, i.e., with few basis points and few training epochs. Instead of optimizing internal network weights, we directly optimize the basis points that define the BNN. Experimental results show that our method achieves better and faster approximation than training with classical loss functions such as MSE, RMSE, MAE, and log-cosh.
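For context, the following is a sketch of standard persistent entropy, of which LWPE is a length-weighted variant; the exact LWPE definition is given in the paper and is not reproduced here. Given a persistence diagram $D = \{(b_i, d_i)\}$, each feature's lifetime is $\ell_i = d_i - b_i$, and persistent entropy is the Shannon entropy of the normalized lifetimes:

\[
  \ell_i = d_i - b_i, \qquad
  p_i = \frac{\ell_i}{\sum_j \ell_j}, \qquad
  \mathrm{PE}(D) = -\sum_i p_i \log p_i .
\]

Longer-lived (more persistent) topological features thus contribute larger probability mass, which is the sense in which such entropies are "weighted by the lifetime of topological features."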
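As an illustrative, hypothetical sketch of the core idea (not the paper's implementation): a 1-D BNN can be realized as linear interpolation over basis points on a fixed grid, with the basis-point ordinates as the only trainable parameters, optimized directly by gradient descent. Plain MSE stands in here for the LWPE loss, which requires persistent-homology machinery not reproduced in this sketch; all names and hyperparameters below are assumptions.

```python
# Hypothetical sketch: a 1-D barycentric network as a CPLF over trainable
# basis points. The paper's LWPE loss is replaced by MSE for illustration.
import torch

class BarycentricNet1D(torch.nn.Module):
    def __init__(self, n_basis: int, x_min: float, x_max: float):
        super().__init__()
        self.x_min, self.x_max = x_min, x_max
        # Fixed, sorted abscissae; the trainable parameters are the basis-
        # point ordinates (a simplification: the paper optimizes the basis
        # points directly, here only the ordinates are free).
        self.register_buffer("xs", torch.linspace(x_min, x_max, n_basis))
        self.ys = torch.nn.Parameter(torch.zeros(n_basis))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Locate the segment containing each query point.
        xq = x.clamp(self.x_min, self.x_max)
        idx = torch.searchsorted(self.xs, xq).clamp(1, len(self.xs) - 1)
        x0, x1 = self.xs[idx - 1], self.xs[idx]
        y0, y1 = self.ys[idx - 1], self.ys[idx]
        # Barycentric (convex) weights on each segment; adjacent segments
        # share basis points, so continuity holds by construction.
        t = (xq - x0) / (x1 - x0)
        return (1 - t) * y0 + t * y1

# Toy usage: approximate sin(x) with few basis points and epochs, mirroring
# the resource-constrained setting described in the abstract.
xs = torch.linspace(0.0, 6.28, 200)
target = torch.sin(xs)
model = BarycentricNet1D(n_basis=8, x_min=0.0, x_max=6.28)
opt = torch.optim.Adam(model.parameters(), lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = torch.mean((model(xs) - target) ** 2)  # MSE stand-in for LWPE
    loss.backward()
    opt.step()
```

Note that the parameter count equals the number of basis points, which is what makes the architecture small-scale and shallow: there are no hidden layers or internal weight matrices to tune.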