To address the global spread of misinformation and growing concerns about content credibility, this paper proposes an explainable fact-checking framework that goes beyond content analysis by incorporating social media dynamics, such as "likes" and user networks. Rather than simply labeling content as false, the framework combines content-based, social, and graph-based features and applies explainability techniques to provide interpretable insights that support its classification decisions. Experiments on English, Spanish, and Portuguese datasets demonstrate that this multimodal information improves performance over unimodal baselines. Using a novel evaluation protocol, we assess the interpretability, reliability, and robustness of the framework's explanations, showing that it generates human-understandable evidence for its predictions.