This paper explores the explainability of AI tools in medicine, using QCancer, a cancer risk prediction tool, as a case study. Experiments were conducted with laypeople (representing patients) and medical students (representing healthcare workers) using two explanation methods, SHAP and Occlusion-1, presented as charts (SC, OC) or as text (OT). The results showed that Occlusion-1 scored higher than SHAP on subjective comprehension and trustworthiness, but this advantage was likely driven by a preference for the text format (OT). In other words, the format of the explanation had a greater impact on users' understanding and trust than its content.
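To make the difference between the two methods concrete, the sketch below illustrates the general idea behind Occlusion-1 on a tabular risk model: each feature is replaced one at a time with a baseline value, and the resulting change in predicted risk is taken as that feature's importance. (SHAP, by contrast, averages a feature's contribution over many feature subsets.) The risk model, feature names, and baseline values here are hypothetical stand-ins, not the actual QCancer model or the paper's implementation.

```python
import numpy as np

def risk_model(x: np.ndarray) -> float:
    # Toy logistic risk score over (age, smoker flag, BMI) -- illustrative only,
    # not the QCancer model.
    weights = np.array([0.04, 0.9, 0.02])
    return 1.0 / (1.0 + np.exp(-(x @ weights - 3.5)))

def occlusion_1(x: np.ndarray, baseline: np.ndarray) -> np.ndarray:
    """Occlusion-1 attribution: the importance of feature i is the change in
    predicted risk when x[i] is replaced with a baseline (e.g., population mean)."""
    base_pred = risk_model(x)
    scores = np.empty_like(x, dtype=float)
    for i in range(len(x)):
        occluded = x.copy()
        occluded[i] = baseline[i]          # occlude one feature at a time
        scores[i] = base_pred - risk_model(occluded)
    return scores

patient = np.array([62.0, 1.0, 31.0])        # age, smoker flag, BMI (hypothetical)
population_mean = np.array([50.0, 0.2, 27.0])
print(occlusion_1(patient, population_mean))
```

The resulting per-feature scores can then be rendered either as a bar chart (the OC condition) or verbalized as a sentence per feature (the OT condition), which is the presentation difference the study found to matter most.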