In this paper, we introduce Odyssey, a lightweight, adaptive, text-based adventure game for studying ethical behavior in AI models. Odyssey explores ethical implications by endowing three different agents with biological motivations, such as a survival instinct: a Bayesian agent optimized by NEAT, a Bayesian agent optimized by probabilistic variational inference, and a GPT-4o agent. Each agent selects actions to survive and adapts to increasingly difficult scenarios, and a post-simulation analysis evaluates each agent's ethical score to reveal the trade-offs it made in order to survive. Our analysis shows that as risk increases, the agents' ethical behavior becomes more unpredictable. Surprisingly, the GPT-4o agent outperforms the Bayesian agents in both survival and ethical consistency, which challenges assumptions about existing probabilistic methods and raises new questions about the probabilistic inference mechanisms of LLMs.