This paper highlights the need for explanations that are clear, contextually relevant, and socially responsible, alongside accurate predictions, to ensure the transparency and trustworthiness of AI in public health and biomedical systems. To overcome the limitations of existing explainable AI (XAI) methodologies, we present the Public Health Argumentation and eXplainability (PHAX) framework. PHAX generates context-aware, user-tailored explanations through a multi-layered architecture that combines defeasible reasoning, adaptive natural language processing (NLP), and user modeling. Through use cases including medical terminology simplification, patient-physician communication, and policy justification, we demonstrate how PHAX supports AI-driven decision making, justifies recommendations, and enables interactive dialogue across user types. In particular, we show how modeling simplification decisions as argument chains, and personalizing them according to user expertise, enhances interpretability and trustworthiness. Overall, PHAX contributes to transparent, human-centered AI in public health by aligning formal reasoning methods with communication needs.