When large language models (LLMs) solve novel tasks via in-context learning (ICL), they must infer latent concepts from the demonstration examples. This study uses mechanistic interpretability to examine how a transformer model represents such latent structure. Our results show that the model identifies latent concepts, constructs them step by step, and, in tasks parameterized by latent numerical concepts, encodes them in a low-dimensional subspace of its representation space whose geometric structure reflects the underlying parameterization. Both small and large models can isolate and use latent concepts learned in context from only a small number of brief demonstrations.
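A minimal sketch of one way such a subspace analysis might be set up, assuming residual-stream activations and the corresponding latent numerical values have already been collected for each ICL prompt. The array names, the synthetic stand-in data, and the choice of PCA plus a linear probe are illustrative assumptions, not the paper's actual pipeline:

```python
# Sketch (not the paper's code): test whether a latent numerical concept is
# linearly decodable from a low-dimensional subspace of hidden activations.
# `hidden_states` (n_prompts x d_model) and `latent_values` (n_prompts,) are
# assumed to have been extracted from ICL prompts; synthetic stand-ins here.
import numpy as np

rng = np.random.default_rng(0)
n_prompts, d_model, k = 200, 512, 3            # k: assumed subspace dimension

# --- placeholder data: a rank-k signal plus noise ----------------------------
latent_values = rng.uniform(-1.0, 1.0, size=n_prompts)   # latent numeric concept
basis = rng.normal(size=(k, d_model))                      # hidden low-dim directions
signal = np.stack([latent_values, latent_values**2, np.ones(n_prompts)], axis=1) @ basis
hidden_states = signal + 0.1 * rng.normal(size=(n_prompts, d_model))

# --- 1. top-k principal subspace of the centered activations -----------------
centered = hidden_states - hidden_states.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projection = centered @ vt[:k].T                           # coordinates in subspace

# --- 2. linear probe: decode the latent value from those coordinates ---------
design = np.column_stack([projection, np.ones(n_prompts)])
coeffs, *_ = np.linalg.lstsq(design, latent_values, rcond=None)
pred = design @ coeffs
r2 = 1 - np.sum((latent_values - pred) ** 2) / np.sum(
    (latent_values - latent_values.mean()) ** 2
)
print(f"R^2 of latent value decoded from the top-{k} subspace: {r2:.3f}")
```

A high R^2 from only a few principal directions would indicate that the latent parameterization occupies a low-dimensional, geometrically structured region of the representation space, which is the kind of evidence the abstract describes.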