This paper uses historical Olympic medal-count data to probe the internal knowledge structure of large language models (LLMs). We evaluate LLMs on two tasks: retrieving the medal count for a given country and determining each country's ranking. We find that while state-of-the-art LLMs excel at retrieving medal counts, they struggle with rankings. This gap between how LLMs organize knowledge and how humans reason over it highlights limitations in LLMs' internal knowledge integration. To facilitate future research, we publicly release our code, dataset, and model outputs.