This paper addresses concerns about trust, oversight, safety, and human dignity that stem from the opacity of modern machine learning models. While explainability methods aid model understanding, it remains challenging for developers to design explanations that are both understandable and effective for their intended audience. A large-scale experiment with 124 participants examined how developers provide explanations to end users, the challenges they face, and the extent to which specific policies guide their behavior. Results revealed that most participants struggled both to generate high-quality explanations and to adhere to the provided policies, and that the nature and specificity of the policy guidance had little impact on effectiveness. We argue that these difficulties stem from a failure to imagine and anticipate the needs of non-technical stakeholders, and we recommend educational interventions grounded in cognitive process theory and social imagination.