This paper focuses on generating personalized human images that depict specific identities from reference images. While existing methods achieve high-fidelity identity preservation, they are limited to single-ID scenarios and lack face editing capabilities. We present DynamicID, a tuning-free framework that supports both single-ID and multi-ID personalized generation with high fidelity and flexible face editing. Its key innovations are: Semantic-Activated Attention (SAA), which minimizes interference with the base model when injecting ID features and enables multi-ID personalization without requiring multi-ID training samples; the Identity-Motion Reconfigurator (IMR), which disentangles and recombines facial motion and ID features to support flexible face editing; a task-decoupled training paradigm that reduces data dependency; and the VariFace-10k dataset, in which 10,000 unique individuals are each represented by 35 distinct face images. Experimental results show that DynamicID outperforms state-of-the-art methods in identity fidelity, face editability, and multi-ID personalization.
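The abstract does not specify SAA's internals, but the core idea it states, injecting ID features while minimally perturbing the base model's attention, can be illustrated with a generic adapter-style sketch. In this toy NumPy example (all names, shapes, and the scalar gate are our own illustrative assumptions, not the paper's actual mechanism), identity tokens get their own attention branch whose output is added to the base text cross-attention through a small gate:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # standard scaled dot-product attention
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def id_injected_attention(q, k_text, v_text, k_id, v_id, gate=0.3):
    # base branch: the model's ordinary text cross-attention
    base = attention(q, k_text, v_text)
    # ID branch: a separate attention over identity tokens,
    # scaled by a gate so it only mildly perturbs the base output
    return base + gate * attention(q, k_id, v_id)

rng = np.random.default_rng(0)
q   = rng.normal(size=(16, 64))   # image-query tokens (hypothetical shapes)
k_t = rng.normal(size=(77, 64))   # text-prompt keys
v_t = rng.normal(size=(77, 64))   # text-prompt values
k_i = rng.normal(size=(4, 64))    # identity-token keys (e.g., face embeddings)
v_i = rng.normal(size=(4, 64))    # identity-token values

out = id_injected_attention(q, k_t, v_t, k_i, v_i)
print(out.shape)  # (16, 64)
```

With `gate=0` the output reduces exactly to the base attention, which is the sense in which this style of injection can leave the pretrained model's behavior intact.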