This paper presents a versatile, zero-resource framework for detecting hallucinations in large language models (LLMs). It leverages a variety of uncertainty quantification (UQ) techniques, including black-box UQ, white-box UQ, and LLM-as-a-Judge, converting their outputs into standardized, response-level confidence scores ranging from 0 to 1. A tunable ensemble approach that combines multiple individual confidence scores is proposed, allowing the framework to be optimized for specific use cases. The Python toolkit UQLM simplifies implementation, and experiments on several LLM question-answering benchmarks show that the ensemble approach outperforms both its individual components and existing hallucination detection methods.
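
To make the ensemble idea concrete, the sketch below shows one way such a tunable combination could be expressed: a weighted average of standardized confidence scores from the three scorer families. The function name, weights, and example scores are illustrative assumptions for exposition, not the UQLM interface.

```python
# Illustrative sketch (not the UQLM API): a tunable ensemble that combines
# standardized per-scorer confidence scores into a single response-level score.
from typing import Dict


def ensemble_confidence(scores: Dict[str, float], weights: Dict[str, float]) -> float:
    """Combine individual confidence scores (each in [0, 1]) via a weighted
    average whose weights can be tuned for a specific use case."""
    total_weight = sum(weights.get(name, 0.0) for name in scores)
    if total_weight == 0.0:
        raise ValueError("At least one scorer must have a positive weight.")
    combined = sum(weights.get(name, 0.0) * s for name, s in scores.items()) / total_weight
    return min(max(combined, 0.0), 1.0)  # clamp to [0, 1]


# Hypothetical scores from the three scorer families described in the paper.
scores = {"black_box": 0.82, "white_box": 0.74, "judge": 0.90}
weights = {"black_box": 0.5, "white_box": 0.2, "judge": 0.3}
print(ensemble_confidence(scores, weights))  # 0.828
```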