This paper proposes a synthetic long-context data generation framework to enhance the ability of large language models (LLMs) to process and reason over long inputs. To address the scarcity of high-quality, diverse, and verifiable long-context datasets, we present a modular and extensible framework that generates data through prompt-based LLM interactions. The framework supports multiple learning and alignment objectives (SFT, DPO, and GRPO) and incorporates four data generation paradigms: multi-turn conversations, document-grounded input-output pairs, verifiable instruction-response tasks, and long-context reasoning examples. Template-based prompting, a model-agnostic architecture, and metadata-rich outputs enable the generation of scalable, controllable, and purpose-specific datasets.
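To make the design concrete, the sketch below illustrates how template-based prompting, a model-agnostic backend, and metadata-rich outputs might fit together. It is a minimal illustration under stated assumptions, not the paper's actual implementation; all names (`PromptTemplate`, `generate_record`, the `call_llm` callable) are hypothetical.

```python
# Minimal sketch of template-based, metadata-rich data generation.
# All interfaces here are illustrative assumptions, not the paper's API.

import json
import uuid
from dataclasses import dataclass
from typing import Callable


@dataclass
class PromptTemplate:
    """A reusable prompt with named placeholders."""
    name: str
    paradigm: str   # e.g. "multi-turn", "doc-io", "verifiable", "reasoning"
    objective: str  # e.g. "SFT", "DPO", "GRPO"
    body: str

    def render(self, **fields: str) -> str:
        return self.body.format(**fields)


def generate_record(template: PromptTemplate,
                    call_llm: Callable[[str], str],
                    **fields: str) -> dict:
    """Render the template, query any backend LLM, and attach metadata."""
    prompt = template.render(**fields)
    return {
        "id": str(uuid.uuid4()),
        "paradigm": template.paradigm,
        "objective": template.objective,
        "template": template.name,
        "prompt": prompt,
        # Model-agnostic: any str -> str callable can serve as the backend.
        "response": call_llm(prompt),
    }


if __name__ == "__main__":
    qa = PromptTemplate(
        name="doc_qa_v1",
        paradigm="doc-io",
        objective="SFT",
        body=("Read the document below and answer the question.\n\n"
              "Document:\n{document}\n\nQuestion: {question}"),
    )
    # Stub backend for illustration; swap in a real model client in practice.
    record = generate_record(qa, lambda p: "<model output>",
                             document="...long document text...",
                             question="What is the main finding?")
    print(json.dumps(record, indent=2))
```

Because each record carries its paradigm, objective, and template identifiers as metadata, downstream pipelines could filter or route examples to the matching training objective (SFT, DPO, or GRPO) without re-parsing the generated text.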