This paper presents a hierarchical Safe Reinforcement Learning (Safe RL) framework for ethical decision-making in autonomous vehicles. The framework centers on a Safe RL agent that generates high-level action goals using ethical risk costs, which combine crash probability and damage severity. It employs a dynamic prioritized experience replay mechanism to enhance learning from rare but critical high-risk events, and it generates smooth, feasible trajectories through polynomial path planning combined with PID and Stanley controllers. Training and validation on a real-world traffic dataset demonstrate superior performance over existing methods in reducing ethical risk while maintaining driving performance. Notably, this is the first Safe RL study to evaluate ethical decision-making for autonomous vehicles in a real-world, mixed-traffic scenario.