This paper investigates consciousness-like behavior in large language models (LLMs) using the Maze Test. The Maze Test requires models to navigate a maze from a first-person perspective, thereby probing several features associated with consciousness at once, including spatial awareness, perspective-taking, goal-directed behavior, and temporal sequencing. Twelve prominent LLMs were evaluated under zero-shot, one-shot, and few-shot learning scenarios and assessed against 13 features associated with consciousness. The results show that reasoning-enabled LLMs consistently outperform their standard counterparts, with Gemini 2.0 Pro achieving 52.9% complete-path accuracy and DeepSeek-R1 achieving 80.5% partial-path accuracy. The gap between complete- and partial-path accuracy suggests that LLMs struggle to maintain a consistent self-model throughout the solution process, a fundamental aspect of consciousness. Although reasoning mechanisms improve these conscious-like behaviors, LLMs still lack the integrated and sustained self-awareness characteristic of consciousness.