This paper addresses the hallucination problem in large vision-language models (LVLMs), with a particular focus on relation hallucination. Unlike previous works that mainly target hallucinations of objects themselves, this paper presents a unified framework that evaluates object and relation hallucinations simultaneously. To this end, we propose a new benchmark, Tri-HE, which evaluates hallucination over (object, relation, object) triplets. Experimental results on Tri-HE show that relation hallucination is a more serious problem than object hallucination, and we present a simple training-free approach to mitigate it. The dataset and code are publicly available.
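The triplet-based evaluation idea can be illustrated with a minimal sketch (hypothetical code, not the actual Tri-HE implementation): triplets extracted from a model's description are compared against ground-truth scene-graph triplets, and any predicted triplet absent from the ground truth counts as a hallucination. The function name and example triplets below are illustrative assumptions.

```python
# Hypothetical sketch of triplet-level hallucination scoring:
# a predicted (object, relation, object) triplet is counted as
# hallucinated if it does not appear in the ground-truth set.

from typing import Set, Tuple

Triplet = Tuple[str, str, str]  # (object, relation, object)

def hallucination_rate(predicted: Set[Triplet],
                       ground_truth: Set[Triplet]) -> float:
    """Fraction of predicted triplets unsupported by the ground truth."""
    if not predicted:
        return 0.0
    unsupported = {t for t in predicted if t not in ground_truth}
    return len(unsupported) / len(predicted)

# Example: one correct triplet and one hallucinated triplet.
gt = {("man", "riding", "horse"), ("horse", "on", "beach")}
pred = {("man", "riding", "horse"), ("dog", "chasing", "horse")}
print(hallucination_rate(pred, gt))  # 0.5
```

Note that this unified representation covers both error types: a wrong object and a wrong relation each invalidate the triplet that contains it.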