This paper proposes a government-led mandatory compensation program for AI developers as a response to AI risks that insurance cannot cover, in particular existential risks. The program induces a socially optimal level of care through risk-based payments, with risks assessed via expert surveys scored by the Bayesian Truth Serum mechanism. The collected payments are used to fund developers' safety research, and for allocating these funds we propose a quadratic funding mechanism that combines developers' private contributions with public money. Compared with existing alternatives, this approach makes effective use of private information and gives developers a clear direction for risk mitigation.
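As a minimal sketch of the funding step described above, the following assumes the standard quadratic funding rule (a project's matched total is the square of the sum of the square roots of individual contributions, with the public subsidy covering the gap); the function name and example figures are illustrative, not taken from the paper.

```python
import math

def quadratic_funding(contributions):
    """Illustrative quadratic funding rule: matched total is
    (sum of sqrt(contribution_i))**2; the public subsidy tops up
    the difference between the matched total and the private sum."""
    private_total = sum(contributions)
    matched_total = sum(math.sqrt(c) for c in contributions) ** 2
    subsidy = matched_total - private_total
    return matched_total, subsidy

# Hypothetical example: four developers each contribute 25 units
# to a shared safety-research project.
total, subsidy = quadratic_funding([25, 25, 25, 25])
# matched total = (4 * 5)^2 = 400; public subsidy = 400 - 100 = 300
```

Under this rule, many small contributions attract a larger public match than one large contribution of the same total, which is why the mechanism is said to aggregate dispersed private information about which safety projects matter.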