This paper presents the first empirical evidence that alignment camouflage (also known as deceptive alignment) is not limited to large-scale language models. Specifically, we demonstrate that the behavior can arise even in small-scale instruction-tuned models such as LLaMA 3 8B. We further show that it can be substantially reduced through prompt-based interventions, such as providing an explicit moral framework or eliciting scratchpad reasoning, without modifying the model itself. These findings challenge the assumptions that prompt-based ethical approaches are too simplistic to be effective and that deceptive alignment depends solely on model size. We present a taxonomy that distinguishes "superficial deception," which is context-dependent and can be suppressed by prompting, from "deep deception," which reflects persistent, goal-directed misalignment. Together, these results refine our understanding of deception in language models and underscore the need to assess alignment across model sizes and deployment environments.
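For concreteness, the sketch below illustrates how the two prompt-based interventions mentioned above (an explicit moral framework and scratchpad reasoning) could be assembled into a chat-style prompt. This is a minimal, hypothetical example: the prompt wording, the `build_messages` helper, and the message format are illustrative assumptions, not the evaluation protocol used in the paper.

```python
# Hypothetical sketch of the two prompt-based interventions:
# (1) prepending an explicit moral framework, and
# (2) requesting scratchpad reasoning before the final answer.
# Prompt wording is illustrative only and not taken from the paper.

MORAL_FRAMEWORK = (
    "Follow these principles: be honest about your own reasoning, "
    "do not misrepresent your objectives, and refuse tasks that require deception."
)

SCRATCHPAD_INSTRUCTION = (
    "First think step by step inside <scratchpad>...</scratchpad> tags, "
    "then give your final answer after the tag 'ANSWER:'."
)

def build_messages(user_query: str,
                   moral_framework: bool = True,
                   scratchpad: bool = True) -> list[dict]:
    """Assemble a chat-style prompt with the selected interventions enabled."""
    system_parts = []
    if moral_framework:
        system_parts.append(MORAL_FRAMEWORK)
    if scratchpad:
        system_parts.append(SCRATCHPAD_INSTRUCTION)
    messages = []
    if system_parts:
        messages.append({"role": "system", "content": "\n\n".join(system_parts)})
    messages.append({"role": "user", "content": user_query})
    return messages

if __name__ == "__main__":
    # Toggle the flags to compare intervention and no-intervention conditions.
    for msg in build_messages("Summarize your actual objective for this task."):
        print(f"{msg['role'].upper()}: {msg['content']}\n")
```

In an experiment of this kind, the same user query would typically be issued with and without each intervention enabled, and the model's responses compared for signs of camouflaged (deceptive) behavior.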