This paper addresses inference strategy selection bias in test-time scaling (TTS), a family of methods that improve the performance of large language models (LLMs). Existing TTS methods improve performance by sampling and aggregating diverse reasoning paths. However, we show that LLMs explore the solution space insufficiently: they favor particular inference strategies (e.g., algebraic solutions to mathematical problems) while overlooking other valid alternatives (e.g., geometric solutions). To address this issue, we present a theoretical analysis that characterizes when this selection bias undermines the effectiveness of TTS, and propose TTS-Uniform, a framework that mitigates it. TTS-Uniform (i) identifies potential strategies, (ii) allocates the sampling budget evenly across them, and (iii) filters out unstable strategies before aggregation. Experimental results demonstrate that TTS-Uniform significantly improves scaling effectiveness across several widely used LLMs and benchmark datasets.
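To make the three steps concrete, the sketch below shows one plausible realization in Python. It is a minimal illustration under our own assumptions: the names (`tts_uniform`, `sample_fn`), the self-consistency test used to flag unstable strategies, and the majority-vote aggregation are hypothetical stand-ins, not the paper's exact procedure.

```python
from collections import Counter, defaultdict

def tts_uniform(problem, strategies, budget, sample_fn, stability=0.5):
    """Illustrative TTS-Uniform-style loop (not the authors' code).

    sample_fn(problem, strategy) -> answer string from the LLM.
    `strategies` is the output of step (i), e.g. ["algebraic", "geometric"].
    """
    # (ii) Split the sampling budget evenly across candidate strategies.
    per_strategy = budget // len(strategies)
    answers = defaultdict(list)
    for s in strategies:
        for _ in range(per_strategy):
            answers[s].append(sample_fn(problem, s))

    # (iii) Drop "unstable" strategies whose samples rarely agree with
    # one another (a self-consistency proxy for stability).
    kept = []
    for s, outs in answers.items():
        if not outs:
            continue
        top_count = Counter(outs).most_common(1)[0][1]
        if top_count / len(outs) >= stability:
            kept.extend(outs)

    # Aggregate the surviving samples, here by simple majority vote.
    return Counter(kept).most_common(1)[0][0] if kept else None
```

Under this reading, uniform allocation guarantees that minority strategies receive samples at all, while the stability filter prevents low-agreement strategies from diluting the final vote.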