This paper proposes "Diffusion Curriculum (DisCL)," a novel data augmentation method that uses a text- and image-guided diffusion model to address the difficulty of training deep neural networks on low-quality or insufficient data. Because text guidance alone offers limited control over how close a synthetic image stays to the original, DisCL uses image guidance to generate a spectrum of intermediate images between fully synthetic and real images. At each training stage, DisCL adjusts the image-guidance level to focus on hard examples, identifying the guidance strength most effective for learning from challenging data. This yields improved performance on long-tail classification and learning from low-quality data. On the iWildCam dataset, it improves OOD and ID macro accuracy by 2.7% and 2.1%, respectively. On the ImageNet-LT dataset, it raises tail-class accuracy from 4.4% to 23.64% and overall accuracy by 4.02%.
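The curriculum idea above can be illustrated with a minimal sketch. Assume (hypothetically) that image-guidance strength is a scalar in [0, 1], where 0 yields a fully synthetic image and 1 stays closest to the real photo, and that training progresses from easier synthetic images toward harder real ones. The function names, guidance levels, and schedule below are illustrative assumptions, not the paper's actual implementation:

```python
def guidance_schedule(progress, levels=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Pick an image-guidance level for the current training stage.

    Hypothetical sketch of a synthetic-to-real curriculum:
    low guidance -> closer to the synthetic image (easier example),
    high guidance -> closer to the real image (harder example).
    `progress` is the fraction of training completed, in [0, 1].
    """
    # Map progress onto the discrete ladder of guidance levels.
    idx = min(int(progress * len(levels)), len(levels) - 1)
    return levels[idx]


def blend(synthetic, real, lam):
    """Stand-in for the diffusion model's image-guided generation:
    a simple pixel-wise interpolation between synthetic and real."""
    return [(1.0 - lam) * s + lam * r for s, r in zip(synthetic, real)]


# Early in training: mostly synthetic; late in training: mostly real.
early = blend([0.0, 0.0], [1.0, 1.0], guidance_schedule(0.1))
late = blend([0.0, 0.0], [1.0, 1.0], guidance_schedule(0.9))
```

In the real method the blend would be produced by the diffusion model itself; the point of the sketch is only the staged schedule over guidance levels.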