Large language models (LLMs) suffer from issues such as hallucination and security risks that stem from the limitations of their static training data. The locate-and-edit paradigm, which directly modifies a model's internal knowledge, has proven to be a cost-effective alternative to retraining; however, current unstructured approaches, particularly window-based autoregressive methods, often disrupt the causal dependency between early memory updates and later output tokens. This work analyzes these limitations theoretically and introduces Matryoshka Unstructured Knowledge Editing ($\mu$KE), a novel memory update mechanism that preserves such dependencies via a Matryoshka-style objective with adaptive loss coefficients. Empirical evaluation of two models across four benchmarks demonstrates that $\mu$KE improves edit efficacy by up to 12.33% over state-of-the-art methods and remains robust when applied to diverse formats of edits, underscoring the potential of effective unstructured knowledge editing in LLMs.
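To make the idea concrete, one possible reading of a Matryoshka-style editing objective (an illustrative sketch under our own assumptions, not necessarily the exact formulation used by $\mu$KE) optimizes a memory update $\delta$ against a nest of prefixes of the edit target $y_{1:T}$, each weighted by an adaptive coefficient $\lambda_k$:
\[
\delta^{*} \;=\; \arg\min_{\delta} \;\sum_{k=1}^{T} \lambda_k \,\bigl[-\log p_{\theta(m+\delta)}\bigl(y_{1:k} \mid x\bigr)\bigr],
\]
where $p_{\theta(m+\delta)}$ denotes the model with the edited memory $m+\delta$ and $x$ the edit prompt. Under this reading, nesting the prefix losses ties later target tokens to the same memory update that produces the earlier ones, which is how the causal dependency between initial memory updates and subsequent output tokens would be preserved.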