This paper adopts a sentence-level analysis to address the interpretability of long-form reasoning in large language models. To understand a model's reasoning process, we present three complementary attribution methods: first, a black-box method that measures the counterfactual importance of each sentence; second, a white-box method that aggregates inter-sentence attention patterns to identify "broadcasting" sentences; and third, a causal attribution method that measures logical dependencies between sentences. Using these methods, we show that certain "thought anchors" exert an outsized influence on the reasoning process, and that these anchors are predominantly planning or reconsideration sentences. We release an open-source tool for visualizing the outputs of the three methods, and we demonstrate their agreement through a case study of a model performing multi-step reasoning.
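As a rough illustration of the white-box component, the sketch below aggregates a token-level attention matrix into per-sentence scores measuring how much attention each sentence receives from later sentences. The function name, the span format, and the choice of a simple mean over later tokens are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def sentence_attention_scores(attn, sentence_spans):
    """Aggregate a token-level attention matrix into sentence-level scores.

    attn:            (num_tokens, num_tokens) array; attn[i, j] is the attention
                     token i pays to token j (e.g. averaged over heads/layers).
    sentence_spans:  list of (start, end) token-index pairs, one per sentence,
                     in trace order.

    Returns one score per sentence: the mean attention it receives from tokens
    in *later* sentences. Sentences with unusually high scores are candidate
    "broadcasting" sentences under this assumed scoring scheme.
    """
    scores = []
    for k, (s_start, s_end) in enumerate(sentence_spans):
        later_rows = [i for (a, b) in sentence_spans[k + 1:] for i in range(a, b)]
        if not later_rows:
            scores.append(0.0)  # last sentence receives no later attention
            continue
        # Attention from every later token to the tokens of sentence k.
        block = attn[np.ix_(later_rows, list(range(s_start, s_end)))]
        scores.append(float(block.mean()))
    return scores


if __name__ == "__main__":
    # Toy example: 6 tokens grouped into 3 sentences of 2 tokens each.
    rng = np.random.default_rng(0)
    attn = rng.random((6, 6))
    attn /= attn.sum(axis=1, keepdims=True)  # row-normalize like softmax output
    spans = [(0, 2), (2, 4), (4, 6)]
    print(sentence_attention_scores(attn, spans))
```

In practice one would obtain `attn` from the model's attention tensors and the spans from a sentence segmenter over the reasoning trace; this toy driver only shows the expected shapes.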