This paper uses sentence-level analysis to address the interpretability of long-form reasoning in large language models (LLMs). To understand LLMs' reasoning processes, we propose three complementary attribution methods: first, a black-box method that measures the counterfactual importance of each sentence; second, a white-box method that aggregates attention patterns between pairs of sentences to identify "broadcasting" sentences and the "receiver" attention heads that attend to them; and third, a causal attribution method that suppresses attention to one sentence and measures its effect on subsequent sentences. All three methods reveal the existence of "thought anchors": sentences that exert an outsized influence on the subsequent reasoning process, and which tend to be planning or reflective sentences. Finally, we provide an open-source tool for visualizing thought anchors and present a case study demonstrating consistent results across multi-step reasoning processes.
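
To make the black-box measure concrete, the following is a minimal sketch of sentence-level counterfactual importance: it compares the rate of correct final answers when the model's reasoning trace is continued with versus without a given sentence. The names `sample_completions`, `correct_answer`, and the accuracy-difference score are illustrative assumptions for this sketch, not the paper's exact procedure.

```python
from typing import Callable, List, Sequence


def counterfactual_importance(
    sentences: Sequence[str],                             # reasoning trace split into sentences
    sample_completions: Callable[[str, int], List[str]],  # hypothetical: prefix -> n sampled final answers
    correct_answer: str,
    n_samples: int = 20,
) -> List[float]:
    """Estimate how much each sentence shifts the probability of a correct final answer."""
    scores = []
    for i in range(len(sentences)):
        prefix_without = " ".join(sentences[:i])      # trace up to, but excluding, sentence i
        prefix_with = " ".join(sentences[: i + 1])    # trace including sentence i

        acc_without = _accuracy(sample_completions(prefix_without, n_samples), correct_answer)
        acc_with = _accuracy(sample_completions(prefix_with, n_samples), correct_answer)

        # A large positive gap suggests the sentence steers the model toward the
        # correct answer; a near-zero gap suggests it is largely inconsequential.
        scores.append(acc_with - acc_without)
    return scores


def _accuracy(answers: List[str], correct: str) -> float:
    return sum(a.strip() == correct for a in answers) / max(len(answers), 1)
```

Under this scheme, candidate thought anchors would surface as sentences with unusually large importance scores; the white-box and attention-suppression methods described above could then be used to cross-check those candidates.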