Existing variational inference methods typically use isotropic Gaussian approximations in the weight space of neural networks, which fit poorly with the networks' intrinsic geometry. To address this, we propose Concentration-Adapted Perturbations (CAP), a variational framework that models weight uncertainty directly on the unit hypersphere using the von Mises-Fisher (vMF) distribution. Building on recent work on radial-directional posterior decomposition and spherical weight constraints, CAP provides the first complete theoretical framework connecting directional statistics to practical noise regularization in neural networks. Our key contribution is an analytical derivation linking the vMF concentration parameter to the activation noise variance, which allows each layer to learn its optimal uncertainty level via a novel closed-form KL divergence regularizer. In experiments on CIFAR-10, CAP significantly improves model calibration, reducing expected calibration error by a factor of 5.6, while providing interpretable layer-wise uncertainty profiles. CAP adds minimal computational overhead and integrates seamlessly into standard architectures, offering a theoretically grounded yet practical approach to uncertainty quantification in deep learning.
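To make the concentration-to-noise correspondence concrete, the following is a minimal PyTorch sketch of a CAP-style linear layer. It is an illustration under our own assumptions, not the authors' implementation: the class name `CAPLinear` is hypothetical, and we use the standard high-concentration approximation in which tangential vMF perturbations of concentration κ behave like isotropic Gaussian noise of variance roughly 1/κ, inducing Gaussian noise on the pre-activations. The paper's closed-form KL regularizer over κ is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CAPLinear(nn.Module):
    """Hypothetical sketch of a CAP-style layer (not the authors' code).

    Weights are constrained to the unit hypersphere, and uncertainty is
    modeled as vMF noise around the mean weight direction. Under a
    high-kappa approximation, this is simulated as Gaussian noise on the
    pre-activations with variance tied to the learned concentration.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        # One learnable log-concentration per layer; each layer can thus
        # adapt its own uncertainty level during training.
        self.log_kappa = nn.Parameter(torch.tensor(5.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Project weights onto the unit hypersphere (mean direction).
        w_dir = F.normalize(self.weight, dim=1)
        pre = F.linear(x, w_dir)
        if self.training:
            # High-kappa approximation (our assumption): tangential vMF
            # noise with per-direction variance ~ 1/kappa maps to
            # pre-activation noise with std scaled by ||x||.
            sigma = torch.rsqrt(self.log_kappa.exp())
            noise = torch.randn_like(pre) * sigma * x.norm(dim=-1, keepdim=True)
            pre = pre + noise
        # In training, a closed-form KL penalty in kappa (derived in the
        # paper) would be added to the loss to regularize concentration.
        return pre


# Usage example:
layer = CAPLinear(64, 32)
out = layer(torch.randn(8, 64))  # shape: (8, 32)
```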