This paper proposes HERGC, a novel multimodal knowledge graph completion (MMKGC) framework that leverages large language models (LLMs) to address the incompleteness of multimodal knowledge graphs (MMKGs), which integrate diverse modalities such as images and text. HERGC first enriches and fuses multimodal information with a heterogeneous expert representation retriever to assemble a compact candidate set for each incomplete triple, and then accurately identifies the correct answer with a generative LLM predictor, implemented via in-context learning or lightweight fine-tuning. Extensive experiments on three standard MMKG benchmarks demonstrate that HERGC outperforms existing methods.
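To make the two-stage pipeline concrete, the following minimal Python sketch illustrates the idea under assumed details: the toy per-modality expert embeddings, the fusion weights `gate`, and the helpers `retrieve_candidates` and `build_predictor_prompt` are hypothetical stand-ins for illustration, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical toy setup: one expert embedding table per modality
# (structure, text, image); sizes and values are illustrative only.
rng = np.random.default_rng(0)
entities = ["Paris", "London", "Berlin", "Rome"]
experts = {
    "structure": rng.normal(size=(len(entities), 8)),
    "text": rng.normal(size=(len(entities), 8)),
    "image": rng.normal(size=(len(entities), 8)),
}
gate = {"structure": 0.5, "text": 0.3, "image": 0.2}  # assumed fusion weights

def retrieve_candidates(query_vec: np.ndarray, k: int = 2) -> list[str]:
    """Stage 1: score every entity by a gated sum of per-modality
    similarities and return the top-k as the candidate set."""
    scores = np.zeros(len(entities))
    for modality, emb in experts.items():
        scores += gate[modality] * (emb @ query_vec)  # dot-product similarity
    top = np.argsort(scores)[::-1][:k]
    return [entities[i] for i in top]

def build_predictor_prompt(head: str, relation: str, candidates: list[str]) -> str:
    """Stage 2: format an in-context-learning prompt asking a generative
    LLM to pick the correct tail from the retrieved candidates
    (lightweight fine-tuning would be the alternative route)."""
    options = ", ".join(candidates)
    return (
        f"Complete the triple ({head}, {relation}, ?).\n"
        f"Candidates: {options}\n"
        "Answer with exactly one candidate."
    )

query = rng.normal(size=8)  # stand-in for the fused (head, relation) query
cands = retrieve_candidates(query, k=2)
print(build_predictor_prompt("France", "capital", cands))
```

The design point the sketch captures is the division of labor: a cheap embedding-based retriever narrows the entity space so that the comparatively expensive LLM only has to discriminate among a few plausible candidates.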