MetaExplainer is a neuro-symbolic framework that generates user-centered, natural language explanations tailored to user questions. It follows a three-stage process: question decomposition using an LLM, system recommendation generation using model explanation methods, and summarization of the explainer outputs into natural language. An explanation ontology guides both the LLM and the choice of explanation method, and the framework supports several explanation types: contrastive, counterfactual, evidence-based, case-based, and data-based. Evaluated on the PIMA Indian diabetes dataset, it achieved a question-reconstruction F1 score of 59.06%, model-explanation fidelity of 70%, and natural language synthesis context utilization of 67%. User studies confirmed the creativity and comprehensiveness of the generated explanations.
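
The three-stage flow can be pictured as a small pipeline. The sketch below is a hypothetical illustration, not the authors' implementation: the LLM calls are stubbed with a keyword heuristic and a template, and all names (`decompose_question`, `EXPLAINER_REGISTRY`, `synthesize_summary`) and the attribution values are assumptions made for illustration.

```python
"""Minimal sketch of a MetaExplainer-style three-stage pipeline.

All identifiers and values here are hypothetical stand-ins; the real
system uses an LLM for stages 1 and 3 and actual explainer methods
(e.g. feature-attribution or counterfactual generators) for stage 2.
"""

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class DecomposedQuestion:
    """Machine-interpretable form of a user question (assumed schema)."""
    explanation_type: str  # e.g. "contrastive", "counterfactual"
    raw_question: str


def decompose_question(question: str) -> DecomposedQuestion:
    """Stage 1: map a free-text question onto an explanation type from
    the ontology. A keyword heuristic stands in for the LLM call."""
    q = question.lower()
    if "instead of" in q or "rather than" in q:
        etype = "contrastive"
    elif "what if" in q:
        etype = "counterfactual"
    else:
        etype = "data-based"
    return DecomposedQuestion(etype, raw_question=question)


# Stage 2: an ontology-guided registry mapping explanation types to
# model-explanation methods. The explainers below return placeholder
# feature scores instead of running a real model.
def contrastive_explainer(dq: DecomposedQuestion) -> Dict[str, float]:
    return {"glucose": 0.42, "bmi": 0.31}  # placeholder attributions


def counterfactual_explainer(dq: DecomposedQuestion) -> Dict[str, float]:
    return {"glucose": -18.0}  # placeholder feature delta


EXPLAINER_REGISTRY: Dict[str, Callable[[DecomposedQuestion], Dict[str, float]]] = {
    "contrastive": contrastive_explainer,
    "counterfactual": counterfactual_explainer,
    "data-based": contrastive_explainer,  # fallback for this sketch
}


def synthesize_summary(dq: DecomposedQuestion, result: Dict[str, float]) -> str:
    """Stage 3: turn raw explainer output into a natural language answer.
    A string template stands in for the LLM summarization step."""
    parts = ", ".join(f"{k} ({v:+.2f})" for k, v in result.items())
    return (f"For your question '{dq.raw_question}', a {dq.explanation_type} "
            f"explanation points to: {parts}.")


def meta_explain(question: str) -> str:
    dq = decompose_question(question)                      # Stage 1
    result = EXPLAINER_REGISTRY[dq.explanation_type](dq)   # Stage 2
    return synthesize_summary(dq, result)                  # Stage 3


if __name__ == "__main__":
    print(meta_explain("What if my glucose level were lower?"))
```

In this reading, the registry plays the role of the explanation ontology: it is the symbolic component that constrains which explainer the neural components may invoke for a given question type.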