To address the probabilistic nature of the LLM inference process and the resulting output variability, this paper proposes ReFeri, a novel framework that validates LLM outputs using few-shot examples. Unlike existing few-shot prompting methods, ReFeri uses few-shot examples not only to generate outputs but also to evaluate candidate outputs. It combines two scores inspired by Bayes' rule and, through additional LLM inference, selects the candidate that exhibits both high confidence and contextual consistency.
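
To make the two-score idea concrete, the following is a minimal sketch of one plausible Bayes-inspired combination: a forward score (the model's confidence in a candidate given the few-shot examples) and a backward score (how well the candidate, reused as a demonstration, explains the examples' own answers). The `LogProbFn` interface, the Q/A prompt format, and the additive combination are illustrative assumptions, not the paper's exact formulation.

```python
from typing import Callable, Sequence, Tuple

# Hypothetical interface: returns the LLM's log-probability of `completion`
# conditioned on `prompt`. Any API exposing token log-probs can back this.
LogProbFn = Callable[[str, str], float]

def format_demos(demos: Sequence[Tuple[str, str]]) -> str:
    """Render few-shot (question, answer) pairs as a prompt prefix."""
    return "".join(f"Q: {q}\nA: {a}\n\n" for q, a in demos)

def referi_score(
    logprob: LogProbFn,
    demos: Sequence[Tuple[str, str]],
    query: str,
    candidate: str,
) -> float:
    """Illustrative two-part score (names are assumptions).

    forward:  log p(candidate | demos, query)           -- confidence
    backward: sum_i log p(a_i | candidate-as-demo, q_i) -- contextual consistency
    """
    # Forward: how likely is the candidate given the few-shot context?
    forward = logprob(format_demos(demos) + f"Q: {query}\nA:", " " + candidate)
    # Backward: swap the candidate in as a demonstration and check whether
    # the original few-shot answers remain likely under it.
    swapped = f"Q: {query}\nA: {candidate}\n\n"
    backward = sum(logprob(swapped + f"Q: {q}\nA:", " " + a) for q, a in demos)
    return forward + backward

def select(
    logprob: LogProbFn,
    demos: Sequence[Tuple[str, str]],
    query: str,
    candidates: Sequence[str],
) -> str:
    """Pick the candidate maximizing the combined score."""
    return max(candidates, key=lambda y: referi_score(logprob, demos, query, y))
```

Under this reading, the backward term is the Bayes-rule flip: instead of only asking how probable the answer is given the examples, it also asks how probable the examples' answers are given the candidate, penalizing confident but contextually inconsistent outputs.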