This paper presents Kernel-Enriched AI (KEA) Explain, a neurosymbolic framework for detecting and explaining hallucinations (syntactically valid but factually unsubstantiated statements) generated by large language models (LLMs). KEA detects hallucinations by comparing a knowledge graph constructed from the LLM output against ground-truth data drawn from Wikidata or from contextual documents, and it uses graph kernels and semantic clustering to produce robust, interpretable explanations for the detected hallucinations. The framework achieves competitive accuracy in hallucination detection on both open- and closed-domain tasks while generating contrastive explanations that enhance transparency. These capabilities improve the reliability of LLMs in high-stakes domains and lay the groundwork for future research on precision enhancement and multidisciplinary knowledge integration.
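To make the graph-comparison idea concrete, the sketch below illustrates how a knowledge graph extracted from LLM output might be scored against a reference graph built from Wikidata triples using a Weisfeiler-Lehman-style subtree kernel. The abstract does not specify which graph kernel KEA uses, so the kernel choice, the graph encoding (relations reified as labeled nodes), and the toy triples are all illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): score a claim graph extracted
# from LLM output against a reference graph (e.g., built from Wikidata
# triples) with a simple Weisfeiler-Lehman-style subtree kernel.
# Relations are reified as labeled nodes for simplicity; this encoding,
# the kernel, and the example triples are assumptions for illustration.

from collections import Counter

def wl_features(edges, labels, iterations=2):
    """Bag of Weisfeiler-Lehman labels for an undirected, node-labeled graph.

    edges  -- iterable of (u, v) node pairs
    labels -- dict mapping node -> initial label
    """
    adj = {n: set() for n in labels}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    current = dict(labels)
    features = Counter(current.values())
    for _ in range(iterations):
        # Relabel each node by its own label plus the sorted neighbor labels.
        current = {
            n: current[n] + "|" + ",".join(sorted(current[m] for m in adj[n]))
            for n in current
        }
        features.update(current.values())
    return features

def wl_kernel(edges_a, labels_a, edges_b, labels_b):
    """Normalized WL subtree kernel: cosine similarity of WL label counts."""
    fa = wl_features(edges_a, labels_a)
    fb = wl_features(edges_b, labels_b)
    dot = sum(fa[k] * fb[k] for k in fa if k in fb)
    norm = (sum(v * v for v in fa.values()) * sum(v * v for v in fb.values())) ** 0.5
    return dot / norm if norm else 0.0

# Hypothetical example: a birthplace claim asserted by the LLM vs. the
# corresponding Wikidata fact.
llm_edges = [("Marie Curie", "born in"), ("born in", "Paris")]
llm_labels = {n: n for n in ("Marie Curie", "born in", "Paris")}

kb_edges = [("Marie Curie", "born in"), ("born in", "Warsaw")]
kb_labels = {n: n for n in ("Marie Curie", "born in", "Warsaw")}

similarity = wl_kernel(llm_edges, llm_labels, kb_edges, kb_labels)
print(f"graph-kernel similarity: {similarity:.3f}")

# A low similarity for the subgraph around a claim would flag a candidate
# hallucination, and the divergent nodes/edges ("Paris" vs. "Warsaw") are
# the kind of material a contrastive explanation could surface.
```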