This paper examines the growing adoption of explainable AI (XAI) in AI-based decision support systems (DSSs) for the construction industry. Current approaches lack the evidence integration needed to support the trustworthiness and accountability of AI-generated results. To address this gap, we present a theoretical, evidence-based ends-means framework developed through a narrative literature review. The framework provides an epistemological foundation for designing XAI-enabled DSSs that generate meaningful explanations tailored to users' knowledge needs and decision contexts, with a focus on assessing the strength, relevance, and utility of the various types of evidence supporting AI-generated explanations. Although developed with construction professionals as the primary end-users, the framework is also applicable to stakeholders with diverse epistemological objectives, such as developers, regulators, and project managers.