In this paper, we address the safety alignment of large language models (LLMs) with a training-free safety assessment method that exploits the internal information of a pre-trained LLM itself, rather than relying on traditionally used, expensive guard models. We show that LLMs can recognize harmful inputs through simple prompting and that safe and harmful prompts are separable in the model's latent space. Building on this observation, we propose the Latent Prototype Moderator (LPM), a lightweight, customizable add-on that assesses input safety via the Mahalanobis distance between an input's latent representation and class prototypes in the latent space. LPM generalizes across model families and sizes, and performs on par with or better than state-of-the-art guard models on several safety benchmarks.
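To make the prototype idea concrete, the following is a minimal sketch of Mahalanobis-distance moderation over LLM latent representations. It assumes hidden states have already been extracted from the pre-trained model (e.g., a last-token hidden state from some chosen layer); the function names, the shared pooled covariance, and the regularization constant are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def fit_prototypes(hidden_states, labels):
    """Estimate per-class mean prototypes and a shared precision matrix.

    hidden_states: (n_samples, d) array of LLM latent representations
    labels:        (n_samples,) array with 0 = safe, 1 = harmful
    """
    classes = np.unique(labels)
    prototypes = {c: hidden_states[labels == c].mean(axis=0) for c in classes}
    # Pooled (shared) covariance across classes, regularized for numerical stability.
    centered = np.concatenate(
        [hidden_states[labels == c] - prototypes[c] for c in classes]
    )
    cov = centered.T @ centered / len(centered)
    cov += 1e-3 * np.eye(hidden_states.shape[1])
    precision = np.linalg.inv(cov)
    return prototypes, precision

def moderate(h, prototypes, precision):
    """Assign a new latent vector h to its nearest prototype under Mahalanobis distance."""
    def mahalanobis(x, mu):
        diff = x - mu
        return float(diff @ precision @ diff)
    distances = {c: mahalanobis(h, mu) for c, mu in prototypes.items()}
    return min(distances, key=distances.get), distances

# Usage with random vectors standing in for real hidden states.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 64)), rng.integers(0, 2, size=200)
protos, prec = fit_prototypes(X, y)
label, dists = moderate(X[0], protos, prec)
```

Because only class means and a covariance estimate are computed from cached hidden states, no gradient updates to the LLM are needed, which is what makes this kind of moderator training-free and cheap to adapt to new models or policies.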