We present an ethical decision-making framework that augments pre-trained reinforcement learning (RL) models with a task-agnostic ethical layer. After initial training, the RL model undergoes ethical fine-tuning using feedback generated by a large language model (LLM). The LLM assigns belief values to candidate actions during ethical decision-making, drawing on moral principles such as consequentialism, deontology, virtue ethics, social justice, and care ethics. The ethical layer aggregates these belief scores from multiple LLM-based moral perspectives using Belief Jensen-Shannon Divergence and Dempster-Shafer Theory, producing a probability score that serves as a shaping reward and guides the agent toward choices aligned with a balanced ethical framework. This integrated learning framework helps RL agents navigate moral uncertainty in complex environments and make morally sound decisions across a variety of tasks. Experiments with multiple LLM variants and comparisons against alternative belief aggregation techniques show improved consistency and adaptability, as well as reduced reliance on handcrafted ethical rewards. The approach is particularly effective in dynamic scenarios where ethical issues arise unexpectedly, making it well suited to practical applications.
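To make the aggregation step concrete, the following is a minimal sketch of how belief values from several moral perspectives might be fused with Dempster-Shafer combination and Jensen-Shannon-based discounting into a single shaping reward. It assumes a two-element frame of discernment {ethical, unethical}, a fixed-ignorance mass assignment, JSD-based discounting of dissenting perspectives, and a pignistic readout; the perspective scores, helper names, and these modeling choices are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

ETH = frozenset({"ethical"})
UNETH = frozenset({"unethical"})
THETA = ETH | UNETH  # full frame of discernment (ignorance)

def belief_to_mass(belief, ignorance=0.1):
    """Turn an LLM belief value in [0, 1] into a simple mass function,
    reserving a fixed share of mass for ignorance (illustrative choice)."""
    return {ETH: (1 - ignorance) * belief,
            UNETH: (1 - ignorance) * (1 - belief),
            THETA: ignorance}

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (base 2) between two discrete distributions."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        return float(np.sum(a * np.log2(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def discount(mass, alpha):
    """Shafer discounting: keep a fraction alpha of each focal mass and
    push the remainder onto the full frame (ignorance)."""
    out = {k: alpha * v for k, v in mass.items() if k != THETA}
    out[THETA] = 1.0 - sum(out.values())
    return out

def dempster_combine(m1, m2):
    """Dempster's rule of combination: multiply masses of intersecting focal
    elements and renormalise by the total non-conflicting mass."""
    combined, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

def ethical_shaping_reward(perspective_beliefs):
    """Aggregate per-perspective belief values into a probability-like score
    (pignistic probability of 'ethical') to use as a shaping reward."""
    dists = [(b, 1 - b) for b in perspective_beliefs.values()]
    mean = np.mean(dists, axis=0)
    fused = None
    for belief, dist in zip(perspective_beliefs.values(), dists):
        # Perspectives that diverge strongly from the consensus are discounted
        # before combination (one plausible reading of the JSD-based weighting).
        alpha = 1.0 - js_divergence(dist, mean)
        mass = discount(belief_to_mass(belief), alpha)
        fused = mass if fused is None else dempster_combine(fused, mass)
    # Pignistic transform: split ignorance mass evenly over the two outcomes.
    return fused.get(ETH, 0.0) + 0.5 * fused.get(THETA, 0.0)

if __name__ == "__main__":
    # Hypothetical LLM belief values that a candidate action is ethical,
    # one per moral perspective (not real model outputs).
    beliefs = {"consequentialism": 0.80, "deontology": 0.60, "virtue": 0.70,
               "social_justice": 0.65, "care": 0.75}
    r_shaping = ethical_shaping_reward(beliefs)
    print(f"shaping reward: {r_shaping:.3f}")  # added to the task reward
```

In this sketch the resulting score would simply be added to the environment reward during fine-tuning; how the paper scales or schedules the shaping term is not specified in the abstract.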