This paper presents DistillPrompt, a novel approach to automatic prompt generation (autoprompting), a direction that has attracted growing attention with advances in prompt engineering for large language models (LLMs). DistillPrompt is an LLM-based autoprompting method that uses training data to integrate task-specific information into prompts through a multi-step process, employing distillation, compression, and aggregation operations to explore the prompt space thoroughly. Experiments with the t-lite-instruct-0.1 language model on a range of text classification and generation datasets demonstrate significant improvements over existing methods on key metrics (e.g., an average improvement of 20.12% over GrIPS across all datasets). These results establish DistillPrompt as one of the most effective non-gradient-based autoprompting approaches.
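To make the multi-step process concrete, the sketch below illustrates how a distillation → compression → aggregation loop over training examples might be organized. It is a minimal illustration under assumed interfaces: the `llm` callable, the function name `distillprompt_sketch`, and the intermediate prompt templates are hypothetical placeholders, not the paper's actual implementation or API.

```python
from typing import Callable, List, Tuple

def distillprompt_sketch(
    llm: Callable[[str], str],                 # hypothetical text-in/text-out LLM wrapper
    train_examples: List[Tuple[str, str]],     # (input, label) pairs from the training data
    seed_prompt: str,
    n_rounds: int = 3,
) -> str:
    """Illustrative distill -> compress -> aggregate loop (not the paper's exact algorithm)."""
    prompt = seed_prompt
    for _ in range(n_rounds):
        # Distillation: extract task-specific information from labeled examples.
        shots = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in train_examples[:5])
        hints = llm(f"Describe the rule that maps these inputs to their labels:\n{shots}")

        # Compression: condense the current prompt together with the new hints.
        compressed = llm(f"Rewrite these instructions concisely:\n{prompt}\n{hints}")

        # Aggregation: merge candidate prompts into a single refined instruction.
        prompt = llm(f"Combine these into one clear task instruction:\n{prompt}\n---\n{compressed}")
    return prompt
```

In this reading, each round injects information distilled from the training data, trims redundancy, and merges candidates, so the search covers more of the prompt space than a single rewrite would.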