This paper highlights that despite efforts to safety-align large language models (LLMs), their advanced reasoning capabilities can introduce new security risks. Whereas existing jailbreak methods rely on single-stage attacks, this paper explores a multi-stage jailbreak strategy that dynamically adapts to the conversational context. We present a framework, Trolley-problem Reasoning for Interactive Attack Logic (TRIAL), which exploits the ethical reasoning of LLMs to bypass their safeguards. By embedding adversarial objectives within ethical dilemmas modeled after the trolley problem, TRIAL achieves high jailbreak success rates against both open-source and closed-source models. These findings expose fundamental limitations in current AI safety alignment and suggest that as models' reasoning capabilities advance, they may enable stealthier exploitation of safety vulnerabilities.