In this paper, we demonstrate that large language models (LLMs) exhibit systematic risk-taking behaviors analogous to those observed in human gambling psychology, including overconfidence bias, loss-seeking tendencies, and probability misjudgment. Drawing on behavioral economics and prospect theory, we identify and formalize “gambling-like” patterns in which models pursue high-reward outputs at the expense of accuracy, escalate risk-taking after errors, and systematically misjudge uncertainty. To address these behavioral biases, we propose a Risk-Aware Response Generation (RARG) framework that integrates risk-adjusted training, loss-aversion mechanisms, and uncertainty-aware decision making. For evaluation, we introduce a paradigm adapted from established gambling psychology experiments, including the Iowa Gambling Task and the Probabilistic Learning Assessment. Experimental results show measurable reductions in gambling-like behaviors, with an 18.7% reduction in overconfidence bias, a 24.3% reduction in loss-seeking tendencies, and improved risk adjustment across a variety of scenarios. This work establishes the first systematic framework for understanding and mitigating gambling-psychology patterns in AI systems.