We present DischargeSim, a new benchmark that evaluates the ability of large language models (LLMs) to serve as personalized discharge educators after patient visits. DischargeSim simulates multi-turn post-visit conversations between LLM-driven DoctorAgents and PatientAgents with diverse psychosocial profiles (e.g., health literacy, education, and emotional intelligence). Interactions are structured across six clinically relevant discharge topics and evaluated along three axes: (1) conversational quality, via automated and LLM-as-judge assessments; (2) personalized document generation, including free-text summaries and structured AHRQ checklists; and (3) patient understanding, via downstream multiple-choice testing. Experiments across 18 LLMs reveal substantial variation in discharge education performance, with outcomes differing markedly across patient profiles. Notably, larger models do not always yield better educational outcomes, highlighting a trade-off between strategy use and content prioritization. DischargeSim represents a first step toward benchmarking LLMs in post-visit clinical education and promoting equitable, personalized patient support.