This paper systematically measures how the knowledge encoded in large language models (LLMs) can be extracted and organized. Recent work converts LLM knowledge into a structured format through recursive extraction, as in the GPTKB methodology, yet key properties of this process, namely its termination, reproducibility, and robustness to perturbations, remain largely unexamined. Using miniGPTKB, a domain-specific and tractable subcrawl, we measure termination rates, reproducibility, and three categories of metrics: yield, lexical similarity, and semantic similarity. The study spans four perturbation variants (seed, language, randomness, and model) and three representative domains (history, entertainment, and finance).
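To make the setup concrete, the following is a minimal Python sketch of a GPTKB-style recursive crawl together with a yield count and one possible lexical-similarity measure. The function `query_llm_triples`, the `max_entities` cutoff, and the Jaccard formulation are illustrative assumptions rather than the paper's actual pipeline; semantic similarity, which would typically compare embedding vectors of the extracted triples, is omitted here.

```python
from collections import deque

def query_llm_triples(entity: str) -> list[tuple[str, str, str]]:
    """Hypothetical placeholder for the LLM call that elicits
    (subject, predicate, object) triples about one entity.
    A real run would prompt the model here; this stub returns
    nothing so the example terminates immediately."""
    return []

def recursive_crawl(seed: str, max_entities: int = 10_000):
    """Breadth-first recursive extraction in the style of GPTKB:
    every newly seen object entity becomes a subject to expand."""
    frontier, seen, triples = deque([seed]), {seed}, []
    while frontier and len(seen) < max_entities:
        entity = frontier.popleft()
        for s, p, o in query_llm_triples(entity):
            triples.append((s, p, o))
            if o not in seen:          # enqueue unseen objects for expansion
                seen.add(o)
                frontier.append(o)
    terminated = not frontier          # True if the crawl stopped on its own,
    return triples, seen, terminated   # False if the entity budget cut it off

def lexical_similarity(run_a, run_b):
    """Jaccard overlap of the triple sets from two runs; one simple
    way to compare crawls across seed/language/randomness/model variants."""
    a, b = set(run_a), set(run_b)
    return len(a & b) / len(a | b) if a | b else 1.0
```

In this sketch, yield is simply `len(triples)` and `len(seen)` for a run, and the termination rate is the fraction of runs for which `terminated` is true under a given entity budget.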