In this paper, we propose a dataset distillation technique that uses a generative model to alleviate the dependency on large datasets. Unlike existing methods that focus on consistency with the original dataset, we introduce a task-specific sampling strategy that improves performance on particular downstream tasks such as classification. Our method generates a distilled dataset by estimating, over an image pool, a sampling distribution that matches the difficulty distribution of the original dataset, and applies a log transformation as a preprocessing step to correct the bias of that distribution. Through extensive experiments, we verify the effectiveness of the proposed method and suggest its applicability to other downstream tasks. The code is available on GitHub.
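To make the sampling step concrete, here is a minimal sketch of difficulty-distribution matching under stated assumptions: difficulty is measured by some per-sample score (e.g., the loss of a pretrained classifier), the log transform is realized as `np.log1p`, and the histogram-based importance weights are our illustrative choice; the function name and details are hypothetical, not the paper's exact formulation.

```python
import numpy as np

def match_difficulty_distribution(orig_scores, pool_scores, n_samples,
                                  n_bins=20, seed=0):
    """Sample pool indices so their difficulty histogram matches the
    original dataset's (a sketch; scores and binning are assumptions)."""
    rng = np.random.default_rng(seed)

    # Log transform as a preprocessing step to correct the heavy right
    # skew typical of difficulty scores such as per-sample losses.
    orig = np.log1p(orig_scores)
    pool = np.log1p(pool_scores)

    # Shared bin edges over the combined score range.
    edges = np.histogram_bin_edges(np.concatenate([orig, pool]), bins=n_bins)

    # Target density from the original dataset; current density of the pool.
    target, _ = np.histogram(orig, bins=edges)
    target = target / target.sum()
    pool_hist, _ = np.histogram(pool, bins=edges)

    # Importance weight per pool sample: target mass / pool mass of its bin.
    bin_idx = np.clip(np.digitize(pool, edges) - 1, 0, n_bins - 1)
    weights = target[bin_idx] / np.maximum(pool_hist[bin_idx], 1)
    weights = weights / weights.sum()

    # Draw distilled-dataset indices following the matched distribution.
    return rng.choice(len(pool), size=n_samples, replace=False, p=weights)
```

In this sketch, weighting each pool sample by the ratio of target to pool bin mass makes the selected subset's (log-transformed) difficulty histogram approximate that of the original dataset.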