This study used a large language model (LLM) and the word2vec algorithm to overcome two limitations of previous studies that assessed scientific concept understanding through children's drawings: task-dependent picture content and subjective interpretation by researchers. We analyzed 1,420 children's drawings on nine scientific topics to explore the consistency of their pictorial representations across topics and to propose a standard for children's scientific drawings. The results confirmed consistency in most of the drawings, with high semantic similarity (mostly >0.8). However, we also found a consistency bias that was independent of LLM accuracy. We further analyzed how factors such as sample size, abstraction level, and focus correlate with drawing consistency and LLM recognition accuracy, and examined whether these factors reflected the course content. The results showed that LLM recognition accuracy was the most sensitive indicator and was also related to sample size and semantic similarity. Furthermore, alignment between the instructional experiment and the educational objectives proved to be an important factor: many students tended to focus on the experiment itself rather than on the explanation.