This paper proposes UltraEdit, an efficient and scalable model editing method that keeps large language models (LLMs) up to date under a continuous stream of knowledge updates. UltraEdit is training-, topic-, and memory-free: it computes parameter changes in a single step using only hidden states and gradients. It further employs a continuous regularization strategy to accommodate distributional shift and maintain consistency over long edit sequences. UltraEdit achieves editing speeds over 7x faster than existing state-of-the-art methods and is the only method capable of editing a 7B LLM on a 24GB GPU. We also construct UltraEditBench, a large-scale dataset containing over 2 million edit pairs and supporting up to 2 million edits, on which UltraEdit demonstrates excellent performance across a variety of model editing scenarios.
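To make the "single-step update from hidden states and gradients" idea concrete, here is a hedged toy sketch of a generic one-step, closed-form parameter edit. This is not the paper's actual algorithm: the rank-1 update form, the normalization by the hidden-state norm, and the function name `one_step_edit` are all illustrative assumptions.

```python
import numpy as np

def one_step_edit(W, h, g, lr=1.0):
    """Toy one-step parameter edit (illustrative, not UltraEdit itself).

    W : weight matrix of one layer, shape (d_out, d_in)
    h : hidden state (layer input) captured for the edit, shape (d_in,)
    g : gradient of the edit loss w.r.t. the layer output, shape (d_out,)

    Applies a rank-1 delta in closed form -- no training loop and no
    stored edit memory, mirroring the "training- and memory-free" claim.
    """
    # Outer product aligns the update with the edit direction;
    # dividing by ||h||^2 bounds the change to the targeted input.
    delta = -lr * np.outer(g, h) / (h @ h)
    return W + delta

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
h = rng.standard_normal(3)
g = rng.standard_normal(4)

# With lr=1, the edited layer's output on h shifts by exactly -g.
W_new = one_step_edit(W, h, g)
print(np.allclose(W_new @ h, W @ h - g))  # True
```

Because the delta is rank-1 and computed in one shot, the cost per edit is a single outer product, which is consistent in spirit with the speed and memory claims above.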