Chain-of-Thought (CoT) reasoning has improved the performance and transparency of language models, but incorrect intermediate statements can undermine both accuracy and reliability. This paper proposes augmenting each step of a CoT with a latent veracity variable. To efficiently explore the expanded space, we introduce Veracity Search (VS), a discrete search algorithm over veracity assignments. VS uses the language model's joint likelihood of the veracity assignment and the final answer as a proxy reward, making tractable the otherwise difficult inference over the posterior distribution of latent veracity values. This efficient inference-time verification method also yields veracity pseudo-labels that enable supervised fine-tuning of an Amortized Veracity Inference (AVI) model. AVI generalizes beyond VS, enabling accurate zero-shot veracity inference in novel contexts. Experimental results demonstrate that VS reliably identifies errors on logical (ProntoQA), mathematical (GSM8K), and commonsense (CommonsenseQA) reasoning benchmarks, while AVI achieves comparable accuracy zero-shot. Finally, we show that latent veracity inference provides useful feedback during self-correction and self-improvement.
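The abstract does not specify the form of the discrete search, so the following is only a minimal illustrative sketch: a beam search over binary veracity assignments, one label per CoT step, scored by a proxy reward. The function `joint_log_likelihood` is a hypothetical stand-in for the language model's joint likelihood of the veracity labels and the final answer; its signature and the beam-search structure are assumptions, not the authors' implementation.

```python
from typing import Callable, List, Tuple

def veracity_search(
    steps: List[str],
    answer: str,
    joint_log_likelihood: Callable[[List[str], Tuple[bool, ...], str], float],
    beam_width: int = 4,
) -> Tuple[bool, ...]:
    """Beam search over binary veracity assignments for CoT steps.

    `joint_log_likelihood(steps_so_far, labels, answer)` is a hypothetical
    scoring hook: it should return the language model's log-likelihood of
    the veracity labels together with the final answer, used here as the
    proxy reward described in the abstract.
    """
    # Each beam entry: (partial veracity assignment, proxy score).
    beams: List[Tuple[Tuple[bool, ...], float]] = [((), 0.0)]
    for i in range(len(steps)):
        candidates = []
        for assignment, _ in beams:
            for label in (True, False):  # step i judged true or false
                new_assignment = assignment + (label,)
                # Score the partial assignment with the proxy reward.
                score = joint_log_likelihood(steps[: i + 1], new_assignment, answer)
                candidates.append((new_assignment, score))
        # Keep only the highest-scoring partial assignments.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    # Best full assignment under the proxy reward.
    return beams[0][0]
```

For a chain of n steps an exhaustive search over all 2^n assignments is possible but quickly becomes infeasible; restricting to a beam is one simple way to keep the search efficient, consistent with the abstract's emphasis on efficient exploration of the expanded space.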