Advances in large language models (LLMs) have made it difficult to distinguish LLM-generated texts from human-written texts. Instead of merely categorizing texts as human- or machine-generated, this study characterizes texts along various linguistic dimensions, including morphology, syntax, and semantics. We select human- and machine-generated texts from eight domains and 11 LLMs and compute a range of linguistic features, such as dependency length and sentiment, while accounting for sampling strategies, iteration control, and model release dates. Human-generated texts exhibit simpler syntactic structures and more diverse semantic content. Measuring feature variability across models and domains revealed that both human- and machine-generated texts vary stylistically across domains, with human-generated texts showing greater variability. We further examined the variability between human- and machine-generated texts using style embeddings and found that the most recent models produce texts with similar variability, suggesting a homogenization of machine-generated texts.