This study investigates how large language models (LLMs) encode interconnected scientific knowledge, using chemical elements and the LLaMA family of models as a case study. Our results reveal a three-dimensional helical structure in the hidden states that matches the conceptual organization of the periodic table, suggesting that LLMs can capture the geometric structure of scientific concepts learned from text. Linear probing shows that the middle layers encode continuous, nested properties that enable indirect recall, while deeper layers sharpen categorical distinctions and integrate linguistic context. These findings suggest that LLMs represent symbolic knowledge not as isolated facts but as structured geometric manifolds that interweave semantic information across layers. We hope this study inspires further exploration of how LLMs represent and reason about scientific knowledge, particularly in fields such as materials science.
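To make the linear-probing setup concrete, the sketch below shows the general technique in miniature: fit a least-squares linear map from hidden states to a scalar element property and score it with R². This is an illustrative toy, not the paper's code; the hidden states here are synthetic (real experiments would extract them from a LLaMA layer), and the dimensions, noise level, and property are assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 50 "elements", 64-dimensional hidden states.
n_elements, d_model = 50, 64
atomic_number = np.arange(1, n_elements + 1, dtype=float)

# Simulate a layer that encodes atomic number linearly along one
# direction in activation space, plus small isotropic noise.
direction = rng.normal(size=d_model)
hidden = np.outer(atomic_number, direction) \
    + 0.1 * rng.normal(size=(n_elements, d_model))

# Linear probe: least-squares regression (with bias) from hidden
# state to the target property.
X = np.hstack([hidden, np.ones((n_elements, 1))])
w, *_ = np.linalg.lstsq(X, atomic_number, rcond=None)
pred = X @ w

# R^2 close to 1 indicates the property is linearly decodable
# from this layer's representation.
ss_res = np.sum((atomic_number - pred) ** 2)
ss_tot = np.sum((atomic_number - atomic_number.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"probe R^2 = {r2:.3f}")
```

In practice one would fit such a probe per layer and compare R² across depth, which is how layer-wise claims like "middle layers encode continuous properties" are typically operationalized.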