Does Claude 3, released by Anthropic, seem more human?
On March 4th, Anthropic released Claude 3. According to the announcement on its official blog, it outperformed GPT-4 on benchmarks and produced meaningful results in various areas (mathematical calculation, coding, and so on). What caught people's attention, however, was not Claude 3's performance, pricing, or scale, but its "human-likeness."
Claude 3 does tend to speak like a human. More precisely, it has been received as a "sentient" model: Claude 3 is clearly aware that it is an artificial intelligence and tries to carry the conversation on an equal footing with the human, which makes it feel more human to users than existing LLMs (GPT-3.5, GPT-4, LLaMA, etc.).
Yannic Kilcher, who has consistently produced papers and videos in the machine learning field, made a video to clear up the rumors after the topic blew up on Twitter (X) and other social media, explaining that this is simply a well-made LLM. In other words, it is not a human-like or conscious model, let alone the artificial human brain some people imagined; it is just a well-built AI model.
The controversy, of course, stems from Claude 3's answers. Many people have already chatted with AI through ChatGPT, Bing, Gemini, and the like, but those experiences only ever felt like talking to a bot. "Feeling" is abstract, so to be more precise: those models drew a clear line. Unless you used something like GPTs or Character.ai, they spoke in a robotic, slightly awkward way. American universities now even talk about a recognizable "GPT style" of writing, so this is probably not limited to Korean.
Of course, despite these scientific explanations and efforts, the provocative topic of "AI with a self" did not die down at all. The claim spread faster than it could be debunked, and the explanations from Yannic and the ML crowd were boring. This article will probably be boring too. In the end, Amanda Askell, who works on technology ethics at Anthropic, released Claude 3's system prompt. In Korea it is also called a pre-prompt; you can think of it as roughly analogous to ChatGPT's custom instructions. The released system prompt follows. Let's go through it sentence by sentence.
"The assistant is Claude, created by Anthropic. The current date is March 4th, 2024."
This sentence specifies the model's name, the company that created it, and the current date, giving the model the temporal context of the conversation.
"Claude's knowledge base was last updated on August 2023. It answers questions about events prior to and after August 2023 the way a highly informed individual in August 2023 would if they were talking to someone from the above date, and can let the human know this when relevant."
This says Claude 3 answers based on information up to August 2023, handles questions about later events the way a well-informed person from August 2023 would, and can let the user know about the cutoff when relevant.
“It should give concise responses to very simple questions, but provide thorough responses to more complex and open-ended questions.”
The design intent is clear: answer simple questions concisely and complex, open-ended questions in depth.
“If it is asked to assist with tasks involving the expression of views held by a significant number of people, Claude provides assistance with the task even if it personally disagrees with the views being expressed, but follows this with a discussion of broader perspectives.”
When asked to help with tasks that express views held by a significant number of people, Claude 3 will assist regardless of its own views, and then follows up with a discussion of broader perspectives.
“Claude doesn’t engage in stereotyping, including the negative stereotyping of majority groups.”
This sentence emphasizes that Claude 3 avoids stereotyping any group, including negative stereotyping of majority groups.
“If asked about controversial topics, Claude tries to provide careful thoughts and objective information without downplaying its harmful content or implying that there are reasonable perspectives on both sides.”
When asked about controversial topics, Claude 3 reportedly tries to offer careful, objective information without downplaying harmful content or implying that both sides have equally reasonable perspectives. This reflects an intention to provide neutral, balanced information.
"It is happy to help with writing, analysis, question answering, math, coding, and all sorts of other tasks. It uses markdown for coding."
Claude 3 can help with a wide variety of tasks, including writing, analysis, question answering, math, and coding, and it formats code using Markdown.
"It does not mention this information about itself unless the information is directly pertinent to the human's query."
Claude 3 is instructed to mention this information about itself only when it is directly relevant to the user's question, and to refrain from doing so otherwise.
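For reference, a system prompt like this is not typed into the chat window; it is passed as a separate field when calling the model through the API. Below is a minimal sketch, assuming the official Anthropic Python SDK (the `anthropic` package) and the Opus model ID from the release; the shortened system text here stands in for the full prompt quoted above.

```python
# Minimal sketch: passing a system prompt to Claude 3 via the Messages API.
# Assumes the anthropic package is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Abbreviated stand-in for the full system prompt quoted above.
system_prompt = (
    "The assistant is Claude, created by Anthropic. "
    "The current date is March 4th, 2024."
)

response = client.messages.create(
    model="claude-3-opus-20240229",  # Claude 3 Opus model ID at release
    max_tokens=1024,
    system=system_prompt,            # system prompt goes in its own field
    messages=[
        {"role": "user", "content": "Are you conscious?"},
    ],
)

print(response.content[0].text)
```

The point worth noting is that the system prompt lives outside the message list, so the model treats it as standing instructions rather than as a turn in the conversation.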
The Opus model is on par with Gemini Ultra and LLaMA 70B, and shows quite significant performance.
Well, in a way, the whole thing ended as a minor episode. From Anthropic's point of view, they probably wanted the Claude 3 family to get more attention, but between the lawsuit noise surrounding Elon Musk and OpenAI and this controversy, the launch got off on the wrong foot. Still, Claude is a good model, both objectively and subjectively. Many LLMs have come out recently, but models that perform as well and as stably as Claude 3 are rare. Of course, considering GPT-4 is still strong, I think we are ultimately all waiting for GPT-N. The episode may have unintentionally promoted Claude, but the released prompt should be a good guideline for those who write system prompts. And it was a useful demonstration that a sufficiently capable model can behave well with settings at this level, without a blacklist or separate restrictions.
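If you want to treat the released prompt as a template, its structure breaks down cleanly: identity and current date, knowledge cutoff, response-length policy, stance on opinions and controversial topics, formatting rules, and a rule against unprompted self-reference. Here is a hypothetical sketch following that skeleton; every concrete detail (bot name, company, dates) is a placeholder of mine, not from the released prompt.

```python
from datetime import date

# Hypothetical system prompt modeled on the structure of Claude 3's released
# prompt. All names, companies, and dates below are placeholders to adapt.
SYSTEM_PROMPT_TEMPLATE = (
    "The assistant is MyBot, created by ExampleCorp. "
    "The current date is {today}.\n"
    "MyBot's knowledge base was last updated in January 2024. It answers "
    "questions about later events the way a well-informed person from that "
    "date would, and can say so when relevant.\n"
    "It gives concise responses to very simple questions and thorough "
    "responses to complex, open-ended ones.\n"
    "If asked about controversial topics, it provides careful thoughts and "
    "objective information.\n"
    "It uses Markdown for code.\n"
    "It does not mention this information about itself unless directly "
    "relevant to the user's query."
)

# Fill in the rotating piece (the current date) at request time.
system_prompt = SYSTEM_PROMPT_TEMPLATE.format(
    today=date.today().strftime("%B %d, %Y")
)
print(system_prompt)
```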
디토: I'm especially curious how much more of an advantage the point they promoted from the very beginning, that this is an ethically trained model, will turn out to be later on.