This paper analyzed developers' refactoring activities through a large-scale empirical study, employing a large language model (LLM) to identify the underlying motivations for refactoring from version control data. By comparing the motivations reported in the literature with those derived from the LLM, we demonstrated that the LLM can effectively identify developers' refactoring motivations. In particular, the LLM produced more detailed rationales for readability, clarity, and structural improvements, yielding richer information than previous studies. Most motivations were pragmatic, focused on simplification and maintainability. Although metrics related to developer experience and code readability ranked highly, their correlations with the motivation categories were weak. In conclusion, the LLM effectively identifies surface-level motivations but struggles with architectural inference. We propose that a hybrid approach combining the LLM with software metrics can systematically prioritize refactoring and balance short-term improvements against long-term architectural goals.