Unlike traditional CoT methods, which rely on a fixed set of human-annotated exemplars, Active-Prompt measures the model's uncertainty to dynamically select the most informative questions for annotation, so the limited human effort goes where it helps the model most. This is especially effective for complex reasoning tasks and has been shown to improve the model's adaptability and accuracy across a range of benchmarks.
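The selection step can be sketched in a few lines. The snippet below ranks unlabeled questions by a disagreement score (the fraction of distinct answers among k sampled completions, one of several uncertainty metrics the Active-Prompt paper discusses) and returns the most uncertain ones for human CoT annotation. The `sample_answer` callable is a hypothetical stand-in for one stochastic LLM query; it, along with the default values of `k` and `n`, are illustrative assumptions, not the paper's exact setup.

```python
from typing import Callable


def disagreement(answers: list[str]) -> float:
    """Uncertainty as the fraction of distinct answers among k samples.

    A simple proxy: if the model gives the same answer every time, the
    score is low; if every sample differs, the score approaches 1.
    """
    return len(set(answers)) / len(answers)


def select_for_annotation(
    questions: list[str],
    sample_answer: Callable[[str], str],  # assumed: one LLM call at temperature > 0
    k: int = 10,  # completions sampled per question
    n: int = 8,   # questions to hand to human annotators
) -> list[str]:
    """Rank unlabeled questions by uncertainty and return the top n.

    These selected questions would then be annotated with human-written
    chain-of-thought rationales and used as few-shot exemplars.
    """
    scored = []
    for q in questions:
        answers = [sample_answer(q) for _ in range(k)]
        scored.append((disagreement(answers), q))
    scored.sort(key=lambda pair: pair[0], reverse=True)  # most uncertain first
    return [q for _, q in scored[:n]]
```

The design intuition is that questions the model already answers consistently add little as exemplars; annotating the questions it is most unsure about yields the greatest gain per human-written rationale.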