This paper proposes CopyrightShield, a defense framework against copyright infringement attacks on diffusion models. It targets attacks in which an adversary deliberately injects seemingly non-infringing images into the training data so that specific prompts induce the model to generate copyright-infringing content. By analyzing the memorization mechanism of diffusion models, CopyrightShield shows that the attack exploits overfitting to specific spatial features and prompts. Building on this analysis, it detects poisoned samples via spatial masking and data imputation, and it reduces the model's reliance on infringement-related features while preserving generation quality through an adaptive optimization strategy that adds a dynamic penalty term to the training loss. Experiments under two attack scenarios show that CopyrightShield substantially improves poisoned-sample detection, achieving an average F1 score of 0.665, delaying the First-Attack Epoch (FAE) by 115.2%, and reducing the Copyright Infringement Rate (CIR) by 56.7%, roughly a 25% improvement over the best-performing existing defense.
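To make the adaptive optimization idea concrete, the sketch below shows one plausible way a dynamic penalty could be attached to the standard diffusion denoising loss: each sample's contribution is modulated by a suspicion score produced by a detector such as the spatial-masking one described above. The function name, the `suspicion_score` input, and the `penalty_weight` parameter are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch, assuming a per-sample suspicion score in [0, 1] is available
# from a poisoned-sample detector; names here are hypothetical.
import torch
import torch.nn.functional as F


def shielded_diffusion_loss(noise_pred, noise_true, suspicion_score, penalty_weight=1.0):
    """Denoising loss with a dynamic, per-sample penalty.

    noise_pred, noise_true: (B, C, H, W) predicted and ground-truth noise.
    suspicion_score:        (B,) score in [0, 1]; higher means the sample looks
                            more like a poisoned one.
    penalty_weight:         how strongly suspicious samples are penalized.
    """
    # Standard per-sample MSE denoising objective.
    per_sample = F.mse_loss(noise_pred, noise_true, reduction="none").mean(dim=(1, 2, 3))

    # Dynamic penalty: suspicious samples contribute less to the fitting term,
    # discouraging overfitting to the attacker's spatial/prompt features.
    weights = 1.0 - penalty_weight * suspicion_score.clamp(0.0, 1.0)
    return (weights * per_sample).mean()
```

In this reading, clean samples (score near 0) are trained on normally, while likely-poisoned samples (score near 1) are progressively down-weighted, which is one way to reduce dependence on infringement features without discarding data outright.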