This paper characterizes texts generated by large language models (LLMs) and texts written by humans using features at multiple linguistic levels, including morphology, syntax, and semantics. Using 11 datasets of LLM-generated and human-written texts spanning 8 domains, we computed a range of linguistic features such as dependency length and sentiment. Statistical analysis showed that human-written texts tend to have simpler syntactic structures and more diverse semantic content. We also measured the variability of these features across models and domains: both human-written and machine-generated texts exhibited stylistic diversity across domains, but human-written texts showed greater variability. We further verified this variability using style embeddings and found that the latest models produce texts with similar variability to one another, suggesting the homogeneity of machine-generated texts.
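As an illustration of one of the syntactic features mentioned above, the sketch below computes mean dependency length, i.e., the average linear distance between each token and its syntactic head. This is a minimal sketch using spaCy under stated assumptions; the pipeline name (`en_core_web_sm`) and the document-level averaging scheme are illustrative choices, not details specified in the paper.

```python
# Minimal sketch: mean dependency length with spaCy.
# Assumptions (not from the paper): the "en_core_web_sm" pipeline and
# averaging over all non-root tokens in the document.
import spacy

nlp = spacy.load("en_core_web_sm")

def mean_dependency_length(text: str) -> float:
    """Average linear distance between each token and its syntactic head."""
    doc = nlp(text)
    distances = [
        abs(token.i - token.head.i)
        for token in doc
        if token.head is not token  # skip roots, whose head is the token itself
    ]
    return sum(distances) / len(distances) if distances else 0.0

print(mean_dependency_length("The quick brown fox jumps over the lazy dog."))
```

Longer dependencies are commonly read as a proxy for syntactic complexity, so a feature like this can be compared between human-written and machine-generated corpora.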