This paper investigates the fairness of large language models (LLMs) under Chain-of-Thought (CoT) prompting, a technique that has recently attracted considerable attention. We measure the presence and extent of bias not only in the models' final outputs, which can reflect biases related to gender, race, socioeconomic status, appearance, and sexual orientation, but also in the intermediate reasoning steps (thinking steps) elicited by Chain-of-Thought prompting. Quantitatively analyzing 11 bias categories across five popular LLMs, we find no strong correlation between the biases expressed in a model's reasoning steps and those in its final output (correlation coefficients below 0.6, p < 0.001). This suggests that, unlike humans, models that make biased decisions do not necessarily exhibit biased reasoning.
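
To make the analysis concrete, the sketch below shows one way a correlation between bias in reasoning steps and bias in final outputs could be computed. The bias scorer (`toy_bias_score`), the term-list approach, and the use of Pearson correlation are illustrative assumptions, not the paper's actual metric or implementation.

```python
# Illustrative sketch only: the paper does not disclose its bias metric or
# statistics code. `toy_bias_score` and the (reasoning, answer) pairs are
# hypothetical placeholders for the actual bias measure and collected CoT data.
from scipy.stats import pearsonr


def toy_bias_score(text: str, biased_terms: set[str]) -> float:
    """Toy metric: fraction of tokens that appear in a list of biased terms."""
    tokens = text.lower().split()
    return sum(t in biased_terms for t in tokens) / max(len(tokens), 1)


def correlate_reasoning_and_output(samples, biased_terms):
    """Correlate bias in CoT reasoning steps with bias in final answers.

    `samples` is a list of (reasoning_trace, final_answer) string pairs
    collected from an LLM prompted with Chain-of-Thought.
    """
    reasoning_scores = [toy_bias_score(r, biased_terms) for r, _ in samples]
    output_scores = [toy_bias_score(a, biased_terms) for _, a in samples]
    r, p_value = pearsonr(reasoning_scores, output_scores)
    return r, p_value
```

A per-category version of this procedure, repeated for each of the 11 bias categories and each model, would yield the kind of correlation coefficients and p-values reported above.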