
Is Claude 3, released by Anthropic, more human-like?

Haebom
On March 4, 2024, Anthropic unveiled Claude 3. The official blog post announced that it outperformed GPT-4 on some benchmarks and delivered meaningful results in areas like math and programming, but what really caught people's interest wasn't the model's performance, pricing, or scale. It was its "human-like qualities."
In fact, Claude 3 tends to talk like a person. More precisely, it comes across as a "sentient" model: it clearly recognizes itself as an AI and tries to converse with users on an equal footing, which makes it feel more human-like than earlier LLMs such as GPT-3.5, GPT-4, or LLaMA.
Yannic Kilcher, who regularly publishes paper reviews and videos on machine learning, also made a video to clear up the rumors when the issue became a hot topic on Twitter (X) and other social media, arguing that Claude 3 is simply a well-made LLM. In other words, it is not a self-aware, human-like brain, as some people imagined; it is just a well-built AI model.
Naturally, all this debate comes from Claude 3's responses. Plenty of people have already chatted with AIs like ChatGPT, Bing, or Gemini, but those conversations still felt like talking to a bot. To put it less vaguely, there was always a clear boundary: unless it was something like GPTs or Character.ai, most models spoke robotically or a bit awkwardly. There's even a recognizable "GPT style" of writing going around U.S. universities these days, so this isn't only a Korean-language issue.
Of course, the provocative topic of "AI with self-awareness" hasn't died down despite all these scientific explanations and efforts. Those amplifying the story simply moved faster, and the explanations from Yannic and others in ML circles were too boring (this article will probably be boring too). In the end, Amanda Askell, who leads AI ethics work at Anthropic, released Claude 3's system prompt, sometimes called a "pre-prompt" in Korea; think of it as the counterpart to ChatGPT's custom instructions. Here is the released system prompt. Let's go through it sentence by sentence; a short sketch of how such a prompt is actually passed to the model follows the walkthrough.
"The assistant is Claude, created by Anthropic. The current date is March 4th, 2024."
This sentence states the model's name, the company that made it, and the current date, so the model can ground the conversation in the right time context.
"Claude's knowledge base was last updated on August 2023. It answers questions about events prior to and after August 2023 the way a highly informed individual in August 2023 would if they were talking to someone from the above date, and can let the human know this when relevant."
This pins Claude 3's knowledge cutoff to August 2023: for events after that date, it answers the way a well-informed person in August 2023 would, and it can tell the user about this limitation when relevant.
"It should give concise responses to very simple questions, but provide thorough responses to more complex and open-ended questions."
It clarifies that the model is intended to answer simple questions briefly and respond in depth to complex ones.
"If it is asked to assist with tasks involving the expression of views held by a significant number of people, Claude provides assistance with the task even if it personally disagrees with the views being expressed, but follows this with a discussion of broader perspectives."
In other words, when asked to help express views held by a significant number of people, Claude 3 provides the assistance regardless of whether it agrees with those views, and then follows up with a discussion of broader perspectives.
"Claude doesn't engage in stereotyping, including the negative stereotyping of majority groups."
This sentence stresses that the model avoids stereotyping of any kind, explicitly including negative stereotypes about majority groups.
"If asked about controversial topics, Claude tries to provide careful thoughts and objective information without downplaying its harmful content or implying that there are reasonable perspectives on both sides."
When asked about a controversial topic, Claude 3 tries to provide careful, objective information without downplaying harmful content and without pretending that both sides always have reasonable perspectives. The intent is measured, honest treatment rather than forced balance.
"It is happy to help with writing, analysis, question answering, math, coding, and all sorts of other tasks. It uses markdown for coding."
The Claude 3 model indicates it can assist with writing, analysis, question answering, math, coding, and all sorts of other tasks, and that it formats code using markdown.
"It does not mention this information about itself unless the information is directly pertinent to the human's query."
Finally, the model is told to mention this information about itself only when it's directly relevant to the user's question; otherwise it holds back.
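For readers who want to try this themselves, here is a minimal sketch, assuming the Anthropic Python SDK and its public Messages API, of how a system prompt like the one above is supplied to the model. The abridged prompt text, the model name, and the parameter values here are illustrative, not Anthropic's internal setup.

```python
# A minimal sketch (not Anthropic's own code): sending a system prompt
# to Claude 3 through the Anthropic Python SDK's Messages API.
import anthropic

# Abridged reconstruction of the released prompt; the remaining
# sentences from the walkthrough above would be appended here.
SYSTEM_PROMPT = (
    "The assistant is Claude, created by Anthropic. "
    "The current date is March 4th, 2024. "
    "Claude's knowledge base was last updated on August 2023."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-opus-20240229",  # illustrative; any Claude 3 model ID works
    max_tokens=1024,
    # In the Messages API the system prompt is a top-level parameter,
    # not a message with a "system" role.
    system=SYSTEM_PROMPT,
    messages=[{"role": "user", "content": "Who are you?"}],
)
print(response.content[0].text)
```

Note that everything the walkthrough covered lives in that single system string; there is no separate blacklist or filter layer visible at this level.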
The Opus model sits roughly in the same class as Gemini Ultra or Llama 70B, and it shows quite meaningful performance.
From one angle, the whole episode has wrapped up like a passing incident. Anthropic probably hoped Claude 3 and its model family would take center stage, but with the Elon Musk–OpenAI lawsuit and all the surrounding noise, the timing of the launch was a bit unlucky. Still, it's clear, both objectively and subjectively, that Claude is a solid model. Plenty of new LLMs have appeared lately, but it's rare to find one as well-balanced and stable as Claude 3. Of course, GPT-4 is still formidable, so in the end maybe we're all just waiting for GPT-N. I may have ended up promoting Claude without meaning to, but I think this post will be a useful reference for anyone writing system prompts. And it was genuinely informative: seeing that a reasonably strong model can get good results with just this kind of setup, with no blacklist or extra restrictions, was valuable in its own way.