This paper addresses the problem of ensuring stable and rule-compliant behavior of autonomous agents in uncertain environments. We approach this problem by integrating probabilistic and symbolic white-box inference models with deep learning methods in a neuro-symbolic system. Such a system combines the strengths of structured inference with the advantages of flexible neural representations, allowing explicit rules and noisy data to be considered jointly. To this end, we introduce the Constitutional Controller (CoCo), a novel framework designed to enhance the safety and reliability of agents through deep probabilistic logic programs that represent the constraints of shared traffic spaces. We further propose the concept of self-doubt, implemented as a probability density conditioned on suspicion features such as moving speed, the sensors in use, or health status. Through real-world aerial mobility studies, we demonstrate that CoCo enables intelligent autonomous systems to learn appropriate levels of doubt and to navigate complex, uncertain environments safely and in an orderly manner.
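
The following is a minimal, illustrative sketch of the self-doubt idea, not the authors' implementation: it models doubt as a probability density over a level d in [0, 1], conditioned on hand-picked suspicion features. The feature names, linear weights, and Beta parameterisation are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy.stats import beta


def doubt_density(speed_mps: float, sensors_active: int, health: float):
    """Return a Beta density over a doubt level in [0, 1].

    Higher speed, fewer active sensors, and lower health shift mass toward
    high doubt. The weights below are hypothetical placeholders, not the
    learned parameters of the paper's deep probabilistic logic program.
    """
    # Aggregate the (assumed) suspicion features into a score in [0, 1].
    suspicion = np.clip(
        0.02 * speed_mps + 0.1 * (3 - sensors_active) + 0.5 * (1.0 - health),
        0.0, 1.0,
    )
    # Map suspicion to Beta(a, b): low suspicion concentrates mass near 0
    # (little doubt), high suspicion concentrates mass near 1 (strong doubt).
    a = 1.0 + 4.0 * suspicion
    b = 1.0 + 4.0 * (1.0 - suspicion)
    return lambda d: beta.pdf(d, a, b)


# Example: a fast agent with a single active sensor and degraded health yields
# a density concentrated at high doubt, which a downstream controller could
# use to restrict or veto actions.
p = doubt_density(speed_mps=12.0, sensors_active=1, health=0.6)
print([round(p(d), 3) for d in (0.1, 0.5, 0.9)])
```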