This paper surveys long-context processing in Transformer-based large language models (LLMs). LLMs perform well on short-text tasks, but their performance degrades markedly as the context grows long. To address this issue, we systematically review recent studies and propose a taxonomy that groups them into four categories: positional encoding, context compression, retrieval augmentation, and attention patterns. In addition, we organize the datasets, tasks, and metrics used by existing long-context benchmarks with a focus on long-context evaluation, summarize open problems, and offer perspectives on future research directions.