
Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please be sure to credit the source when sharing.

The Constitutional Controller: Doubt-Calibrated Steering of Compliant Agents

Created by
  • Haebom

Authors

Simon Kohaut, Felix Divo, Navid Hamid, Benedict Flade, Julian Eggert, Devendra Singh Dhami, Kristian Kersting

Outline

This paper addresses the problem of ensuring safe and rule-compliant behavior of autonomous agents in uncertain environments. It presents a solution that integrates probabilistic, symbolic white-box reasoning models with deep learning methods in a neuro-symbolic system, combining the strengths of structured reasoning with the flexibility of learned representations by jointly considering explicit rules and neural network models trained on noisy data. To this end, the authors introduce the Constitutional Controller (CoCo), a novel framework designed to enhance the safety and reliability of agents through deep probabilistic logic programs that represent constraints such as those governing shared traffic spaces. They also propose the concept of self-doubt, implemented as a probability density conditioned on doubt-related features such as movement speed, the sensors in use, or health status. Through real-world aerial mobility studies, the authors demonstrate that CoCo helps intelligent autonomous systems learn appropriate levels of doubt and navigate complex, uncertain environments safely and in an orderly, rule-compliant manner.
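To make the doubt-calibrated steering idea concrete, here is a minimal illustrative sketch in Python. It is not the authors' implementation: the names (AgentFeatures, self_doubt, compliance_probability, doubt_calibrated_choice), the feature weights, and the threshold are all hypothetical. It only shows the general pattern of combining a rule-compliance probability (standing in for a probabilistic-logic query over constraints such as shared traffic spaces) with a self-doubt score computed from features like speed, active sensors, and health, and falling back to a conservative action when the doubt-discounted compliance is too low.

```python
# Illustrative sketch only: hypothetical names and weights, not the CoCo implementation.
from dataclasses import dataclass
import math


@dataclass
class AgentFeatures:
    speed: float          # current movement speed (m/s)
    sensors_active: int   # number of sensors currently in use
    health: float         # system health status in [0, 1]


def self_doubt(f: AgentFeatures) -> float:
    """Toy doubt score in [0, 1): higher speed, fewer active sensors, and
    lower health make the agent less certain about its own state."""
    raw = 0.05 * f.speed + 0.3 / max(f.sensors_active, 1) + 0.6 * (1.0 - f.health)
    return 1.0 - math.exp(-raw)


def compliance_probability(action: str) -> float:
    """Stand-in for a probabilistic-logic query P(action complies with the rules),
    e.g. rules describing shared traffic spaces such as restricted zones."""
    lookup = {"hold_position": 0.99, "cross_corridor": 0.80, "enter_restricted": 0.05}
    return lookup.get(action, 0.5)


def doubt_calibrated_choice(actions: list[str], f: AgentFeatures, threshold: float = 0.6) -> str:
    """Pick the best action whose compliance, discounted by self-doubt, clears the
    threshold; otherwise fall back to a conservative default action."""
    best, best_score = "hold_position", 0.0
    for a in actions:
        score = compliance_probability(a) * (1.0 - self_doubt(f))
        if score >= threshold and score > best_score:
            best, best_score = a, score
    return best


if __name__ == "__main__":
    confident = AgentFeatures(speed=1.0, sensors_active=4, health=0.95)
    doubtful = AgentFeatures(speed=8.0, sensors_active=1, health=0.5)
    for f in (confident, doubtful):
        # The confident agent crosses the corridor; the doubtful one holds position.
        print(doubt_calibrated_choice(["cross_corridor", "enter_restricted"], f))
```

The design choice illustrated here is simply that the same candidate action can be acceptable or not depending on how much the agent currently doubts itself, which is the behavioral pattern the paper's real-world aerial mobility studies are reported to exhibit.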

Takeaways, Limitations

Takeaways:
  • Proposes improving the safety and reliability of autonomous agents with a neuro-symbolic system.
  • Supports rule compliance and safe decision-making through the CoCo framework.
  • Manages uncertainty and increases safety by introducing the concept of self-doubt.
  • Presents empirical results from real-world aerial mobility studies.
Limitations:
  • Further research is needed on the generalization performance and scalability of the CoCo framework.
  • Applicability to a wider range of environments and agent types still needs to be verified.
  • Further work is needed on quantitatively measuring self-doubt and on intervention strategies based on it.
  • Handling of exceptional situations that may arise in real-world environments requires further study.