This paper presents a novel approach to studying how well the representations of large language models (LLMs) match human representations. We use activation steering to identify neurons associated with specific concepts (e.g., “cat”) and analyze their activation patterns. We show that the LLM representations captured in this way are highly similar to human representations inferred from behavioral data, and are consistent with the level of human-to-human agreement. This agreement is much higher than that obtained with the word embeddings used in previous studies, demonstrating that LLMs organize concepts in a human-like manner.
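To make the pipeline concrete, the following is a minimal, hypothetical sketch of the two steps named above: selecting concept-specific neurons from activation differences and correlating the resulting LLM concept similarities with a human similarity matrix. The helper names (`concept_neurons`, `representation_alignment`) and the specific choices (top-k selection by mean activation difference, Spearman rank correlation between pairwise-similarity matrices) are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch only; names and similarity measures are assumptions.
import numpy as np
from scipy.stats import spearmanr


def concept_neurons(act_with, act_without, top_k=50):
    """Rank neurons by the mean activation difference between prompts that
    do and do not contain the concept word (e.g., "cat"); keep the top_k."""
    diff = act_with.mean(axis=0) - act_without.mean(axis=0)
    return np.argsort(-np.abs(diff))[:top_k]


def representation_alignment(llm_vectors, human_sim):
    """Correlate the LLM's pairwise concept similarities with a human
    similarity matrix inferred from behavioral judgments."""
    # Cosine similarity between concept vectors.
    norm = llm_vectors / np.linalg.norm(llm_vectors, axis=1, keepdims=True)
    llm_sim = norm @ norm.T
    # Compare only the off-diagonal concept pairs via rank correlation.
    iu = np.triu_indices_from(llm_sim, k=1)
    rho, _ = spearmanr(llm_sim[iu], human_sim[iu])
    return rho
```

Under these assumptions, a higher rank correlation indicates closer agreement between the model's concept organization and the human one, and the same measure can be applied to word embeddings for comparison.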