Large language models (LLMs) suffer performance degradation when processing long contexts due to proactive interference, where irrelevant information earlier in the context disrupts reasoning and memory recall. Unlike prior work that focuses on external memory systems to enhance LLM performance, this paper proposes actively shaping an LLM's internal working memory by providing tools for Active Context Management (ACM). Through a framework called Sculptor, LLMs are equipped with three categories of tools: (1) context fragmentation, (2) summarization, hiding, and restoration, and (3) precision retrieval. Together, these tools enable LLMs to actively manage their attention and working memory. Evaluations on multiple long-context benchmarks demonstrate that Sculptor significantly improves LLM performance even without task-specific training, leveraging models' inherent tool-calling and instruction-following capabilities. Furthermore, to optimize this strategy, we introduce a novel dynamic context-aware reinforcement learning (RL) approach that advances the training of agents that actively modify their own conversation transcripts. Through active context management, Sculptor not only mitigates proactive interference but also provides a cognitive foundation for more reliable reasoning across diverse long-context tasks. This highlights that explicit context-control strategies, rather than simply larger token windows, are key to robustness at scale.
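The three tool categories can be pictured with a minimal sketch. The class, method names, and data layout below are illustrative assumptions for exposition only, not Sculptor's actual API; the abstract names only the tool categories (fragmentation, summarize/hide/restore, and retrieval), and the retrieval here is a naive substring match standing in for whatever search the framework actually uses.

```python
class ACMContext:
    """Hypothetical sketch of Active Context Management over a transcript."""

    def __init__(self, transcript, delimiter="\n\n"):
        # (1) Fragmentation: segment the transcript into addressable chunks.
        self.fragments = [
            {"id": i, "text": t, "hidden": False, "summary": None}
            for i, t in enumerate(transcript.split(delimiter))
        ]

    def hide(self, frag_id, summary):
        # (2) Summarize + hide: replace a fragment's visible text with a
        # short summary stub, shrinking the working context.
        frag = self.fragments[frag_id]
        frag["hidden"], frag["summary"] = True, summary

    def restore(self, frag_id):
        # (2) Restore: bring the full fragment text back into view.
        self.fragments[frag_id]["hidden"] = False

    def search(self, query):
        # (3) Retrieval: find fragments (visible or hidden) matching a query.
        return [
            f["id"] for f in self.fragments
            if query.lower() in f["text"].lower()
        ]

    def render(self):
        # What the model "sees": visible text, or a summary placeholder.
        parts = []
        for f in self.fragments:
            if f["hidden"]:
                parts.append(f"[hidden #{f['id']}: {f['summary']}]")
            else:
                parts.append(f["text"])
        return "\n\n".join(parts)
```

In this picture, the model calls `hide` on distractor fragments to suppress proactive interference, then uses `search` and `restore` when an earlier fragment becomes relevant again, so attention is spent only on what the current step needs.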