Seeing Fine-Tuning and RAG Being Used Interchangeably
Haebom
Recently, in posts on Facebook and LinkedIn, I keep seeing these two terms used interchangeably. As their names suggest, they are clearly different techniques. Because both are used to improve a model's performance and to adapt a model to a specific task or context, they are often conflated. The differences between the two are as follows.
Fine-tuning
The goal of fine-tuning is not to add new information to an existing model, but to change the model's behavior.
Typical use cases are enforcing a specific response format or style from a model, or producing complex outputs that would otherwise require a chain of prompts.
In many use cases, fine-tuning can also reduce the number of tokens required per request, or lower request latency.
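To make "changing behavior" concrete, here is a minimal sketch of what fine-tuning training data looks like in the widely used chat-style JSONL format (the system prompt, task, and sentiment schema are illustrative placeholders, not from any real dataset). Note that every example teaches the model a *format* of response, not a new fact:

```python
import json

# Each training example pairs an input with the exact behavior we want:
# here, always answering in a fixed JSON schema. This shapes the model's
# output style; it does not inject new knowledge.
SYSTEM = 'Reply only with JSON: {"sentiment": ...}'

examples = [
    {"messages": [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "I love this product!"},
        {"role": "assistant", "content": '{"sentiment": "positive"}'},
    ]},
    {"messages": [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "This is the worst purchase I've made."},
        {"role": "assistant", "content": '{"sentiment": "negative"}'},
    ]},
]

def to_jsonl(rows):
    """Serialize training examples, one JSON object per line."""
    return "\n".join(json.dumps(r, ensure_ascii=False) for r in rows)

jsonl = to_jsonl(examples)
print(jsonl.splitlines()[0])
```

A file like this is what you would upload to a fine-tuning API; after training, the model produces the schema without needing the long instruction in every prompt, which is where the token and latency savings come from.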
Retrieval-augmented Generation (RAG)
If you want to add new knowledge to the model, use RAG.
Strictly speaking, RAG doesn't add new knowledge to the model itself; it injects relevant retrieved information into the prompt at request time, and the model then uses that context to answer the user's question.
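The retrieve-then-prompt flow can be sketched in a few lines. This toy version uses bag-of-words cosine similarity over a hypothetical three-document corpus; real systems use embedding models and vector stores, but the structure is the same: retrieve relevant text, then place it in the prompt.

```python
import math
from collections import Counter

# Toy document store. In practice this would be chunks of your own data.
corpus = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The office is closed on public holidays.",
    "Support is available via email at all hours.",
]

STOPWORDS = {"the", "is", "a", "of", "on", "at", "what", "our"}

def vectorize(text):
    """Lowercased bag-of-words vector, minus punctuation and stopwords."""
    tokens = (t.strip(".,?!").lower() for t in text.split())
    return Counter(t for t in tokens if t and t not in STOPWORDS)

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    return sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Inject retrieved context into the prompt sent to the model."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What is the refund policy?", corpus)
print(prompt)
```

The model never "learns" the refund policy; it simply reads it from the prompt on each request, which is why updating the corpus immediately updates the answers.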
In an experiment on a real dataset, the accuracy of a fine-tuned model was compared with that of the base model augmented with RAG: the fine-tuned model scored 0%, while the RAG-augmented model scored 95%. This clearly shows that fine-tuning does not add new knowledge to the model.
Differences:
Purpose and How It Works: Fine-tuning adjusts or specializes the behavior of an existing model, while RAG retrieves information from external data sources and supplies it to the model at request time.
Knowledge Integration: RAG appears to integrate new knowledge into the model, but it never modifies the model itself; it simply leverages the retrieved results. Fine-tuning, by contrast, does modify the model.
The two are confused because both are used to improve a model's output and can produce similar results in certain situations. But there is an important difference in purpose and mechanism: RAG generates responses using external information, while fine-tuning adjusts the model's own weights and behavior.