This paper points out the limitations of existing causal inference methodologies and proposes a novel method for extracting causal knowledge from text-based metadata using large language models (LLMs). To address the reliability issues of LLMs, we introduce a consistency measure and, by accounting for indirect causal relationships, focus on inferring causal order rather than full causal directed acyclic graphs (DAGs). We propose a method for deriving the class of acyclic tournaments that maximize the LLM consistency score and use them to estimate causal effects. We verify the effectiveness of the proposed method through experiments on real-world datasets and existing benchmarks in epidemiology and public health.
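The following is a minimal sketch, not the authors' implementation, of the core idea stated above: treating a causal order as an acyclic tournament (equivalently, a total order over variables) and selecting the orders that maximize a consistency score aggregated from repeated pairwise LLM judgments. The variable names, the vote table standing in for LLM responses, and the function `consistency_score` are illustrative assumptions.

```python
# Sketch only: search for causal orders (acyclic tournaments) that maximize
# a pairwise consistency score. LLM queries are stubbed with a fixed table.

from itertools import permutations, combinations

# Hypothetical stand-in for repeated LLM queries: for each ordered pair (a, b),
# the fraction of sampled responses asserting "a causes b (possibly indirectly)".
llm_pairwise_votes = {
    ("smoking", "tar"): 0.9, ("tar", "smoking"): 0.1,
    ("tar", "cancer"): 0.8, ("cancer", "tar"): 0.2,
    ("smoking", "cancer"): 0.85, ("cancer", "smoking"): 0.15,
}

variables = ["smoking", "tar", "cancer"]

def consistency_score(order):
    """Sum the LLM agreement over all ordered pairs implied by this causal order."""
    return sum(
        llm_pairwise_votes[(order[i], order[j])]
        for i, j in combinations(range(len(order)), 2)
    )

# An acyclic tournament over the variables corresponds to a total order, so
# exhaustive search over permutations is feasible for small variable sets.
scores = {order: consistency_score(order) for order in permutations(variables)}
best = max(scores.values())
maximizers = [order for order, s in scores.items() if s == best]

print("Maximally consistent causal order(s):", maximizers)
```

In this toy setting the maximizing class contains the single order (smoking, tar, cancer); in general, ties yield a class of equally consistent orders, which can then be carried forward to downstream causal-effect estimation.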