This paper addresses the challenge of generating diverse follow-up questions to fill information gaps in conversational agents built on small, locally deployed models. We propose an information gap-based knowledge distillation pipeline in which a teacher LLM generates a comprehensive answer, compares it with the initial answer, identifies the information gaps between them, and formulates follow-up questions to fill those gaps. Using this pipeline, we expanded the existing FollowupQG dataset tenfold and fine-tuned a small student model on the expanded dataset to distill the teacher's knowledge. Experiments on selected teacher-student model pairs show that the fine-tuned student generates follow-up questions of significantly higher informational quality and diversity than a variant trained on the original dataset. These results suggest that the pipeline, which mirrors the human cognitive process of information seeking, offers an efficient distillation channel from state-of-the-art LLMs to small models, enabling resource-constrained conversational systems to produce more diverse and information-rich follow-up questions.
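As a rough illustration of the pipeline summarized above, the sketch below outlines one plausible data-expansion loop in Python. The function names (`call_teacher_llm`, `expand_example`), the prompt wording, and the output fields are hypothetical placeholders under our own assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the information gap-based distillation pipeline.
# All function names, prompts, and data fields are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class QAPair:
    question: str
    initial_answer: str


def call_teacher_llm(prompt: str) -> str:
    """Placeholder for a call to the teacher LLM (wire to any chat API)."""
    raise NotImplementedError("Connect this to the teacher model of choice.")


def expand_example(example: QAPair) -> list[dict]:
    # Step 1: the teacher produces a comprehensive answer to the question.
    comprehensive = call_teacher_llm(
        "Answer the following question as comprehensively as possible:\n"
        f"{example.question}"
    )

    # Step 2: compare it with the initial answer and list information gaps.
    gaps = call_teacher_llm(
        "List the information present in the comprehensive answer but "
        "missing from the initial answer.\n"
        f"Initial answer: {example.initial_answer}\n"
        f"Comprehensive answer: {comprehensive}"
    )

    # Step 3: turn each identified gap into a follow-up question.
    followups = call_teacher_llm(
        "For each information gap below, write one follow-up question that "
        f"would elicit the missing information:\n{gaps}"
    )

    # The resulting (question, initial answer, follow-up question) triples
    # become training data for fine-tuning the small student model.
    return [
        {
            "question": example.question,
            "initial_answer": example.initial_answer,
            "followup": line.strip(),
        }
        for line in followups.split("\n")
        if line.strip()
    ]
```

In this reading, the teacher is queried once per step so that each intermediate artifact (comprehensive answer, gap list, follow-up questions) can be inspected or filtered before the triples are added to the expanded dataset.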