Additional information prompting

Generated Knowledge Prompting is a technique proposed by Jiacheng Liu's research team in the 2022 paper "Generated Knowledge Prompting for Commonsense Reasoning". It generates knowledge statements directly from a language model and supplies them as additional input when answering a question. The method is particularly useful for tasks that require commonsense reasoning or factual accuracy.
The principle is surprisingly simple and consists of two steps. First, a language model generates knowledge statements related to the question. Second, the generated knowledge is integrated into inference: a prediction is made with each knowledge statement, and the prediction with the highest confidence is selected as the final answer.
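To make the two steps concrete, here is a minimal sketch using the OpenAI Python SDK. The prompts, the model name, and the use of average token log-probability as a confidence proxy are my own assumptions for illustration, not the paper's exact setup.

```python
# Minimal sketch of Generated Knowledge Prompting (assumes the OpenAI SDK >= 1.0
# and OPENAI_API_KEY in the environment; prompts/model are illustrative).
from openai import OpenAI

client = OpenAI()

def generate_knowledge(question: str, n: int = 3) -> list[str]:
    """Step 1: sample n knowledge statements related to the question."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Generate a short factual statement that helps answer:\n{question}",
        }],
        n=n,             # draw several diverse statements
        temperature=0.7,
    )
    return [choice.message.content for choice in response.choices]

def answer_with_knowledge(question: str, knowledge: str) -> tuple[str, float]:
    """Step 2: answer using one knowledge statement; return (answer, confidence)."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Knowledge: {knowledge}\nQuestion: {question}\nAnswer yes or no.",
        }],
        logprobs=True,   # token log-probabilities serve as a crude confidence proxy
        temperature=0,
    )
    choice = response.choices[0]
    logprobs = [t.logprob for t in choice.logprobs.content]
    confidence = sum(logprobs) / len(logprobs)
    return choice.message.content, confidence

def generated_knowledge_answer(question: str) -> str:
    """Make one prediction per knowledge statement, keep the most confident."""
    predictions = [answer_with_knowledge(question, k)
                   for k in generate_knowledge(question)]
    return max(predictions, key=lambda p: p[1])[0]
```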
If you have used custom GPTs, this may sound familiar: the same technique is applied in some of them. Knowledge is supplied as input, the model produces an answer consistent with that knowledge, and what the user inputs or answers is integrated with it to extract a better final answer.
Knowledge statements are generated with a language model (here, GPT-3.5) tailored to the specific question. This is done through a handful of demonstrations, each consisting of a question in the target task's style and a knowledge statement that helps answer it. A knowledge statement is simply a sentence that explicitly states information relevant to the topic or question; it is nothing fancy.
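The demonstrations can be packed into a single few-shot prompt. Below is a sketch of such a prompt in the style the paper describes; the demonstrations themselves are illustrative, not taken verbatim from the paper.

```python
# Sketch of a few-shot knowledge-generation prompt; demonstrations are
# illustrative examples, not the paper's originals.
KNOWLEDGE_PROMPT = """Generate a knowledge statement relevant to the input.

Input: Part of golf is trying to get a higher point total than others.
Knowledge: The goal of golf is to complete the course in the fewest strokes; the player with the lowest score wins.

Input: Water freezes when it gets warmer.
Knowledge: Water freezes at 0 degrees Celsius; cooling, not warming, turns it to ice.

Input: {question}
Knowledge:"""

prompt = KNOWLEDGE_PROMPT.format(
    question="Is there any part of golf where you try to get a higher score than other people?"
)
```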
Here's an example from the paper:
❓
"Is there any part of golf where you try to get a higher score than other people?"
🤖 (without knowledge)
Part of golf is trying to get a higher score than other people.
Generated knowledge => The goal of golf is to complete the course in the fewest strokes. The player with the lowest score wins.
❓
"Is there any part of golf where you try to get a higher score than other people?"
🤖 (with generated knowledge)
In golf, players aim to complete the course in the fewest strokes. The lowest score wins.
Importance of Generated Knowledge Prompting
Generated knowledge prompting represents a significant advance for LLMs, especially on tasks that require understanding complex or subtle information. By generating knowledge and incorporating it into the prompt, an LLM can produce more accurate, context-aware responses. In the golf example above, it is the generated knowledge in the prompt that lets the model correctly understand that lower scores are better; including generated knowledge can substantially improve the model's understanding and accuracy.
To add a little more: in practice, everything becomes easier if you use a vector DB or an embedding model, retrieving the supporting knowledge rather than generating it every time. It is an effective method when you want to get the best efficiency out of your prompts.
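As a rough sketch of that idea, the snippet below retrieves the most relevant statement from a small in-memory knowledge store with an embedding model; the model name and the toy store are assumptions for illustration, not a production vector DB.

```python
# Sketch of the retrieval alternative: look knowledge up with embeddings
# instead of generating it (model and store are illustrative assumptions).
import numpy as np
from openai import OpenAI

client = OpenAI()

knowledge_store = [
    "The goal of golf is to complete the course in the fewest strokes.",
    "In tennis, a match is won by winning a majority of sets.",
]

def embed(texts: list[str]) -> np.ndarray:
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

store_vectors = embed(knowledge_store)

def retrieve(question: str) -> str:
    """Return the stored statement most similar to the question."""
    q = embed([question])[0]
    # Cosine similarity against every stored vector.
    sims = store_vectors @ q / (np.linalg.norm(store_vectors, axis=1) * np.linalg.norm(q))
    return knowledge_store[int(np.argmax(sims))]
```

The retrieved statement can then be dropped into the prompt exactly where the generated knowledge would otherwise go.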
ⓒ 2023. Haebom, all rights reserved.
It may be used for commercial purposes with permission from the copyright holder, provided the source is cited.