Active Prompts: Boost Performance with Varied Examples

Active-Prompt differs from traditional chain-of-thought (CoT) prompting in several important ways. Whereas conventional CoT methods rely on a fixed set of human-annotated exemplars, Active-Prompt dynamically selects task-specific exemplars for annotation. The Active-Prompt process follows these steps:
1. Uncertainty Estimation: A large language model (LLM) is repeatedly queried on each question in a task, generating multiple candidate responses. Uncertainty metrics—such as the number of diverging answers (disagreement), entropy, and variance—are then used to score the uncertainty of each question.
2. Selection: The questions with the highest calculated uncertainty are chosen for annotation.
3. Annotation: Human annotators provide reasoning chains and answers for the selected questions, creating new exemplars.
4. Inference: Using these newly annotated exemplars, inference is performed for each question, and the most consistent answer is selected.
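Steps 1 and 2 above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the question IDs and sampled answers below are hard-coded placeholders, where in practice each answer list would come from k stochastic LLM calls on the same question.

```python
import math
from collections import Counter

def disagreement(answers):
    """Fraction of distinct answers among the k samples
    (one of the uncertainty metrics used in Active-Prompt)."""
    return len(set(answers)) / len(answers)

def entropy(answers):
    """Empirical entropy of the sampled answer distribution."""
    counts = Counter(answers)
    k = len(answers)
    return -sum((c / k) * math.log(c / k) for c in counts.values())

def select_uncertain(question_answers, n):
    """Step 2: pick the n questions whose sampled answers diverge most."""
    ranked = sorted(question_answers,
                    key=lambda qa: entropy(qa[1]),
                    reverse=True)
    return [q for q, _ in ranked[:n]]

# Illustrative pool: (question id, answers sampled from the LLM)
pool = [
    ("Q1", ["90", "90", "90", "90"]),    # model is consistent -> low uncertainty
    ("Q2", ["90", "100", "99", "100"]),  # answers diverge -> high uncertainty
    ("Q3", ["12", "12", "13", "12"]),
]
print(select_uncertain(pool, 1))  # → ['Q2']
```

The question selected here ("Q2") is the one a human annotator would then write a reasoning chain for in step 3.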
Reference: Active Prompting with Chain-of-Thought for Large Language Models (PDF)
Unlike traditional CoT methods, which rely on a fixed set of human-annotated examples, Active-Prompt measures uncertainty to dynamically select and annotate the most informative questions, maximizing LLM performance. This is especially effective for complex reasoning tasks and has been shown to improve the model's adaptability and accuracy across a range of tasks.
"Younghee is packing a backpack for a long-distance hike. What should she bring?"
For this question, the language model generates multiple reasoning paths and answers. For instance, it might produce a checklist of items needed for hiking or recommend equipment for a particular scenario. Questions whose sampled answers diverge the most are flagged as uncertain and passed to human annotators, who refine the reasoning process and provide a reliable answer.
"Cheolsu booked a room in a 10-story hotel. Each floor has 10 identical rooms. Can all of the rooms be used?"
The language model generates multiple answers to this question, and the disagreement among them marks it as highly uncertain. For example, one sampled response might reason, "Since the top floor can't be used, there are a total of 90 available rooms," while another arrives at a different count.
The key to the Active-Prompt approach is identifying the questions whose generated answers are most uncertain and reinforcing them with human annotation, yielding more accurate and trustworthy results.
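The inference step (step 4) resolves the remaining disagreement with self-consistency: sample several reasoning paths and keep the majority answer. A minimal sketch, with the sampled paths hard-coded for the hotel example above:

```python
from collections import Counter

def self_consistent_answer(sampled_answers):
    """Majority vote over the final answers of multiple reasoning paths
    (the self-consistency decoding used in Active-Prompt's inference step)."""
    return Counter(sampled_answers).most_common(1)[0][0]

# Illustrative: final answers from five sampled reasoning paths
# for the hotel question (in practice, five LLM calls).
paths = ["90", "100", "90", "90", "100"]
print(self_consistent_answer(paths))  # → '90'
```

With the newly annotated exemplars in the prompt, the sampled paths tend to agree more often, which is where the accuracy gain comes from.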
ⓒ 2023. Haebom, all rights reserved.
This may be used for commercial purposes with the copyright holder's permission, as long as the source is credited.