This paper presents an algorithm for generating propositions that faithfully represent graphs, in support of consistency-based reasoning. We also benchmark the ability of large language models (LLMs) to reconstruct consistency graphs from these propositions after a simple transformation into natural language. Using a single prompt on reasoning-oriented LLMs, we obtain promising results: for example, o1/o3/o4-mini achieve perfect reconstruction half the time on sparse graphs. Consistency assessment via consistency-based reasoning in LLMs could enhance machine cognitive capabilities.
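As a rough illustration of the proposition-to-graph round trip described above, the following sketch encodes a graph's edges as templated natural-language propositions and then recovers the edge set by parsing them. The sentence template, function names, and edge representation are all hypothetical; they are not the paper's actual algorithm or benchmark format.

```python
# Hypothetical sketch, not the paper's algorithm: render each directed
# edge as a fixed-template proposition, then reconstruct the edge set
# by parsing those sentences back.

def graph_to_propositions(edges):
    """Render each edge (u, v) as a short natural-language proposition."""
    return [f"{u} is consistent with {v}." for u, v in edges]

def propositions_to_graph(propositions):
    """Recover the edge set by splitting on the fixed sentence template."""
    edges = set()
    for p in propositions:
        left, right = p.rstrip(".").split(" is consistent with ")
        edges.add((left, right))
    return edges

edges = {("A", "B"), ("B", "C"), ("A", "C")}
props = graph_to_propositions(edges)
assert propositions_to_graph(props) == edges  # lossless round trip
```

An LLM benchmark of this kind would replace the deterministic parser with a prompted model and score how often the reconstructed edge set exactly matches the original.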