This paper analyzes the under-utilization of Bayesian neural networks (BNNs), which we attribute to the mismatch between the standard Gaussian posterior and the network geometry, the instability of the KL term in high dimensions, and unreliable uncertainty calibration despite the added implementation complexity. We revisit the problem from a regularization perspective and model uncertainty with a von Mises-Fisher posterior that depends only on the weight direction. This yields a single, interpretable scalar per layer, the effective regularized noise ($\sigma_{\mathrm{eff}}$), which corresponds to simple additive Gaussian noise in the forward pass and admits a compact, closed-form, dimension-aware KL correction. By deriving a closed-form approximation relating the concentration $\kappa$, the activation variance, and $\sigma_{\mathrm{eff}}$, we obtain a lightweight, easy-to-implement variational unit that fits modern regularized architectures and improves calibration without sacrificing accuracy. Dimensionality awareness is crucial for stable optimization in high dimensions, and we show that BNNs can be principled, practical, and accurate when the variational posterior is aligned with the network's intrinsic geometry.
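To make the idea of the abstract concrete, the following is a minimal PyTorch-style sketch of a direction-only layer whose uncertainty is summarized by a single per-layer scalar $\sigma_{\mathrm{eff}}$ realized as additive Gaussian noise in the forward pass. The class name `VMFNoisyLinear`, the learnable `log_sigma_eff` parameterization, and the row-wise weight normalization are illustrative assumptions; the paper's closed-form mapping between $\kappa$, the activation variance, and $\sigma_{\mathrm{eff}}$ is not reproduced here.

```python
import torch
import torch.nn as nn


class VMFNoisyLinear(nn.Module):
    """Illustrative sketch (not the paper's exact construction):
    a linear layer that uses only the weight direction and injects
    additive Gaussian noise with a single per-layer scale sigma_eff."""

    def __init__(self, in_features: int, out_features: int, init_log_sigma: float = -3.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * in_features ** -0.5)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # One scalar per layer, stored on the log scale so sigma_eff stays positive.
        # In the paper this would be tied to the vMF concentration kappa via the
        # closed-form relation; here it is simply a free parameter for illustration.
        self.log_sigma_eff = nn.Parameter(torch.tensor(init_log_sigma))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Use only the weight direction: normalize each output row, reflecting a
        # posterior that depends on direction rather than magnitude.
        w_dir = self.weight / self.weight.norm(dim=1, keepdim=True)
        out = nn.functional.linear(x, w_dir, self.bias)
        if self.training:
            # Uncertainty enters as simple additive Gaussian noise on the pre-activations.
            sigma_eff = self.log_sigma_eff.exp()
            out = out + sigma_eff * torch.randn_like(out)
        return out


# Usage sketch: behaves like a drop-in linear layer.
layer = VMFNoisyLinear(128, 64)
y = layer(torch.randn(32, 128))
```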