HyperCLOVA X THINK is the first reasoning-focused large language model in the HyperCLOVA X family, pre-trained on approximately 6 trillion high-quality Korean and English tokens augmented with targeted synthetic Korean data. The model is a compute-memory-balanced Peri-LN Transformer scaled with μP, pre-trained through a three-stage curriculum that expands the context window to 128K tokens, and then post-trained with supervised fine-tuning and reinforcement learning from verifiable rewards (RLVR). It supports both a detailed rationale mode and a concise answer mode, and it delivers performance competitive with similarly sized models on Korean-centric benchmarks such as KMMLU, CSAT, KoBALT-700, HAERAE-1.0, and KoBigBench, while maintaining strong bilingual consistency and translation quality. A vision-augmented variant matches or exceeds GPT-4.1 on the KCSAT STEM benchmark. These results are achieved with substantially less training compute than existing models of similar size, and pruning and distillation techniques are also presented toward an open-source, business-friendly base model.
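For readers unfamiliar with the Peri-LN arrangement mentioned above, the following is a minimal sketch, assuming the common formulation in which layer normalization is applied peripherally, i.e. to both the input and the output of each sublayer before the residual addition. All module choices, dimensions, and names here are illustrative assumptions and not the paper's actual implementation.

```python
# Illustrative sketch only: assumes Peri-LN computes x + Norm(f(Norm(x))) per sublayer.
# Dimensions and module choices are hypothetical, not taken from HyperCLOVA X THINK.
import torch
import torch.nn as nn


class PeriLNBlock(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8, d_ff: int = 2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )
        # Peripheral norms: one on the input and one on the output of each sublayer.
        self.norm_attn_in = nn.LayerNorm(d_model)
        self.norm_attn_out = nn.LayerNorm(d_model)
        self.norm_ffn_in = nn.LayerNorm(d_model)
        self.norm_ffn_out = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Attention sublayer: x + Norm(Attn(Norm(x)))
        h = self.norm_attn_in(x)
        h, _ = self.attn(h, h, h, need_weights=False)
        x = x + self.norm_attn_out(h)
        # Feed-forward sublayer: x + Norm(FFN(Norm(x)))
        x = x + self.norm_ffn_out(self.ffn(self.norm_ffn_in(x)))
        return x


if __name__ == "__main__":
    block = PeriLNBlock()
    tokens = torch.randn(2, 16, 512)   # (batch, sequence, d_model)
    print(block(tokens).shape)         # torch.Size([2, 16, 512])
```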