This paper demonstrates that n-gram language models remain useful even in the era of large language models (LLMs) by modernizing them with large-scale training data of 5 trillion tokens. In particular, we developed an infinite n-gram (∞-gram) model, in which n can be made arbitrarily large, and an infini-gram engine that computes ∞-gram probabilities with millisecond-level latency using a suffix array. With this engine, we analyzed human-written and machine-generated text, confirming that the ∞-gram model predicts the next token of human-written text with fairly high accuracy (47%) and that combining its estimates with neural LLMs reduces the LLMs' perplexity. In addition, our analysis of machine-generated text uncovered deficiencies in Transformer positional embeddings and in LLM pre-training.
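To make the suffix-array mechanism concrete, the following is a minimal sketch, not the authors' infini-gram implementation: it finds the longest suffix of the context that occurs in the corpus via binary search over a suffix array, then estimates next-token probabilities from the counts of its continuations. The toy corpus, token ids, and helper names are illustrative assumptions; a real engine builds the suffix array over trillions of tokens with far more efficient methods.

```python
# Sketch of ∞-gram next-token estimation over a suffix array (toy example).
from bisect import bisect_left, bisect_right

corpus = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]                 # toy token-id corpus
sa = sorted(range(len(corpus)), key=lambda i: corpus[i:])  # naive suffix-array build

def occurrences(pattern):
    """Suffix-array positions whose suffixes start with `pattern` (binary search)."""
    lo = bisect_left(sa, pattern, key=lambda i: corpus[i:i + len(pattern)])
    hi = bisect_right(sa, pattern, key=lambda i: corpus[i:i + len(pattern)])
    return sa[lo:hi]

def infgram_next_token_probs(context):
    """∞-gram estimate: back off to the longest context suffix with nonzero count."""
    for start in range(len(context) + 1):        # try the longest suffix first
        suffix = context[start:]
        positions = occurrences(suffix)
        counts = {}
        for p in positions:
            nxt = p + len(suffix)
            if nxt < len(corpus):                # count the token following each match
                counts[corpus[nxt]] = counts.get(corpus[nxt], 0) + 1
        total = sum(counts.values())
        if total:
            return {tok: c / total for tok, c in counts.items()}
    return {}

print(infgram_next_token_probs([1, 5]))          # {9: 1.0} for this toy corpus
```

Because the suffix array keeps all corpus suffixes in sorted order, each lookup is a pair of binary searches, which is what allows count queries (and hence ∞-gram probabilities) to be answered with millisecond-level latency even over very large corpora.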