This paper addresses the diversity of number systems across languages, exploring why large language models (LLMs) struggle to solve linguistic-mathematical puzzles built on these systems even though humans solve them successfully. We conduct experiments that disentangle the linguistic and mathematical aspects of number composition and combination. Our results show that LLMs consistently fail to solve these problems unless the mathematical operation is represented explicitly with symbols (e.g., "20 + 3"). We further analyze how individual parameters of number composition and combination affect performance. We conclude that whereas humans understand and reason about the inherent structure of number systems, LLMs lack any concept of this structure.