This paper proposes a novel protocol for ensuring the correctness of autonomous AI agents in dynamic, low-trust environments. It builds on the observation that correctness emerges in systems where exposing an error costs less than letting the error stand, and it supports delegation of tasks to subagents. The protocol enforces correctness through collateralized claims resolved by a recursive verification game. Tasks are published as intents, and solvers compete to execute them; the chosen solver performs the task with its collateral at stake, and the result is verified ex post by a verifier. Any challenger can trigger verification by disputing a result: a dishonest agent is penalized, while a correct dissenter is rewarded. A dishonest verifier is likewise punished through a higher level of verification. When the incentives of solvers, challengers, and verifiers are aligned in this way, the threat of falsification makes correct behavior a Nash equilibrium.
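As an illustrative sketch of the incentive alignment described above (the symbols below are assumptions introduced here, not notation from the paper), suppose the solver posts collateral $C$, an incorrect result yields the solver an extra gain $g$, a wrong result is successfully challenged with probability $p$ (slashing the collateral), a challenger pays a verification cost $c$, and a successful challenger receives a reward $R$ funded from the slashed collateral. Under these assumptions, honest behavior is a Nash equilibrium when deviating is unprofitable for the solver and disputing a wrong result is individually rational for the challenger:
\[
\underbrace{g < p\,C}_{\text{cheating is unprofitable}}
\qquad\text{and}\qquad
\underbrace{c < R \le C}_{\text{challenging a wrong result pays}} .
\]
The second condition captures the property that the cost of exposing an error ($c$) is lower than the value recovered from the party that caused it ($R$, bounded by the collateral $C$), so challenges are expected and the first condition then deters solvers from erring in the first place.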