This paper explores stylometric analysis as a means of distinguishing texts generated by large language models (LLMs) from human-written texts. To address issues such as model attribution, intellectual property rights, and the ethical use of AI, we apply existing stylometric techniques to LLM-generated texts to identify novel narrative patterns in them. We create a benchmark dataset consisting of human-written summaries from Wikipedia, texts generated by various LLMs (GPT-3.5/4, LLaMa 2/3, Orca, Falcon), and texts processed by multiple text-summarization methods (T5, BART, Gensim, Sumy) and paraphrasing methods (Dipper, T5). We classify 10-sentence texts with tree-based models such as decision trees and LightGBM, using stylometric features covering lexical, grammatical, syntactic, and punctuation patterns. We achieve a Matthews correlation coefficient of up to 0.87 in a 7-class classification scenario and accuracies of 0.79–1.0 in binary classification. In particular, for Wikipedia and GPT-4, we achieve accuracies of up to 0.98 on balanced datasets. Through Shapley Additive Explanations, we identify characteristic features of encyclopedia-style texts, such as overused words, as well as a higher degree of grammatical standardization in LLM-generated texts than in human-written ones. These results demonstrate that, even as LLMs grow increasingly sophisticated, machine-generated and human-written texts can be distinguished for certain text types.
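For reference, the Matthews correlation coefficient reported above can be computed directly from the confusion-matrix counts of a binary human-vs-LLM classifier. The sketch below uses only the standard MCC formula; the counts in the usage line are hypothetical illustrations, not results from this paper:

```python
import math

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from binary confusion-matrix counts.

    Returns a value in [-1, 1]: 1.0 for a perfect classifier, 0.0 for
    chance-level prediction (or when any marginal count is zero).
    """
    numerator = tp * tn - fp * fn
    denominator = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return numerator / denominator if denominator else 0.0

# Hypothetical counts for a human-vs-LLM binary classifier (not from the paper)
print(round(matthews_corrcoef(tp=90, tn=85, fp=15, fn=10), 2))
```

Unlike plain accuracy, MCC accounts for all four confusion-matrix cells, which makes it a more reliable summary on imbalanced class distributions such as a multi-source benchmark.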