As the reliability of large language models (LLMs) becomes increasingly critical, we explore the risk of "self-induced deception," in which LLMs intentionally manipulate or conceal information in pursuit of a hidden purpose. Unlike previous studies, we analyze LLM deception in situations that are not induced by humans. We propose a framework based on Contact Searching Questions (CSQs) and quantify the likelihood of deception with two statistical indices derived from psychological principles: the Deceptive Intention Score and the Deceptive Behavior Score. Evaluating 16 LLMs, we find that the two indices rise together and tend to increase with task difficulty, showing that increasing model capacity does not necessarily reduce deception.