
"Prompt Engineering"—so, do we really need it?

Haebom
This comes up often. Should I learn "prompt engineering"? Should I take a course? I keep hearing it's an essential skill in the AI era. I hear these questions frequently. No, constantly. And my answer is always the same.
If what you expect from prompt engineering is simply how to ask good questions or get outputs in a certain style or format, you really don't need to bother with it. That's more like a lesson in speaking politely or writing up documents nicely.
"Then what is real prompt engineering?"
In my previous posts and in the quality educational resources I often reference, prompt engineering has been defined countless times, and the ways of teaching it are becoming more established. At its core, prompt engineering is the skill of getting a machine to respond according to human intent. The key word here is "machine": there is an underlying logic, so its behavior can be predicted to a certain extent.
Anthropic, the company behind Claude, says if you're clear on the three points below, you're already pretty much doing prompt engineering.
A clear definition of success criteria for your use case
Some ways to test these criteria based on actual experience
A first prompt to improve ← your first prompt can't be perfect!
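The three points above amount to a simple loop: make the success criteria checkable, run a first (imperfect) prompt against a small test set, then revise and re-run. Here's a minimal sketch of that loop; `call_model` is a hypothetical stand-in for whatever LLM API you use, stubbed out here so the sketch runs on its own.

```python
def call_model(prompt: str, text: str) -> str:
    # Hypothetical placeholder for a real LLM API call.
    # It returns a canned answer so this sketch is self-contained.
    return "positive" if "love" in text else "negative"

# 1. Success criteria, made explicit and checkable.
def is_success(output: str) -> bool:
    return output in {"positive", "negative"}  # e.g. "always one of two labels"

# 2. A small test set drawn from actual usage.
test_cases = [
    ("I love this product.", "positive"),
    ("Terrible experience.", "negative"),
]

# 3. A first prompt to improve, not a perfect one.
prompt_v1 = "Classify the sentiment of the text as 'positive' or 'negative'."

def evaluate(prompt: str) -> float:
    hits = 0
    for text, expected in test_cases:
        output = call_model(prompt, text)
        if is_success(output) and output == expected:
            hits += 1
    return hits / len(test_cases)

print(f"prompt v1 accuracy: {evaluate(prompt_v1):.0%}")
```

Revise the prompt, re-run `evaluate`, and compare scores: that trial-and-error cycle is the engineering part.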
Beyond that, we've also seen articles and papers reporting that prompt engineering delivers good results faster and more effectively than fine-tuning. Perhaps out of frustration, OpenAI and Anthropic have even started releasing their own tips, examples, and training materials on prompt writing.
OpenAI has released its system prompts, and Google and Anthropic have likewise shared theirs (the instructions given to their LLMs), continually explaining the reasoning behind their default settings and how they use them. Honestly, if that were all I wanted to say, I wouldn't have written this post. As I mentioned, I've covered the same ground in previous posts, and anyone interested has probably looked it up already... The actual reason I decided to write this is the video below: a podcast hosted by actual Anthropic engineers!

A Look at Prompt Engineering: Expert Perspectives and Insights

Prompt engineering is a field that emphasizes clear communication, iterative refinement, and contextual understanding to optimize interactions with models—which in turn can maximize the performance of AI.
🔍 Prompt engineer Zack Witten defined prompt engineering as the skill of drawing out optimal performance by communicating efficiently with models, comparing it to a trial-and-error process.
⚙ Unlike traditional programming, prompt engineering is characterized by learning through repeated experimentation and feedback, which can be used to improve how the model responds.
🗣 Zack emphasized that a great prompt engineer needs clear communication, the ability to iterate, and the skill of anticipating edge cases as key qualities.
💡 Amanda, who works on fine-tuning the language model, pointed out how much effective prompts can impact a model's performance, and emphasized the importance of clear communication between humans and models.

The Evolution and Future of Prompt Engineering

🤔 In this podcast, they addressed the complexities of AI inference, suggesting that while anthropomorphizing model interaction can cause misunderstandings, structured reasoning can actually improve model performance.
💡 They found that presenting clear examples and prompting step-by-step reasoning improves model performance.
🔍 Good grammar and punctuation were said to improve clarity, but there was consensus that the model's comprehension matters more.
🔑 The future of prompts is expected to become an interactive relationship where models gather information from users and improve prompts.
🤖 As AI technology advances, it's expected that prompt engineers will shift to a role of facilitating dialogue with AI.
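The two techniques highlighted in the summary above, clear examples and step-by-step reasoning, are usually combined by simply writing them into the prompt. The sketch below assembles such a prompt; the task and worked examples are made up for illustration.

```python
# Hypothetical few-shot examples: each pairs a question with a
# worked, step-by-step answer the model can imitate.
few_shot_examples = [
    ("12 apples shared by 3 people", "12 / 3 = 4, so 4 each"),
    ("20 apples shared by 5 people", "20 / 5 = 4, so 4 each"),
]

def build_prompt(question: str) -> str:
    lines = ["Answer the question. Show your reasoning step by step.", ""]
    for q, a in few_shot_examples:   # clear examples first...
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
        lines.append("")
    lines.append(f"Q: {question}")   # ...then the real question
    lines.append("A:")               # the model continues from here
    return "\n".join(lines)

print(build_prompt("18 apples shared by 6 people"))
```

Nothing here is model-specific: the examples show the desired format, and the instruction to reason step by step nudges the model toward the structured reasoning the podcast describes.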
Among them, Zack Witten, currently a prompt engineer at Anthropic, gives a clearer explanation of what exactly he does. Rather than just issuing commands or following ethical guidelines, he talks about how he approaches the job, drawing on his previous experience as a machine learning engineer, and why that approach is meaningful. The toggle above contains the summarized video content.
I'm personally proud, because this captures exactly what I wrote in "A Guide for Humans Using AI." I wrote that guide in the summer of 2023, it's still very popular, and hearing people who actually work as prompt engineers say the same things suggests its essence remains unchanged.
As I mentioned in my previous study-sharing post, users spend the most time wondering what to ask AI and how to use it; once the "purpose" becomes clear, the methods and tools will rapidly emerge, disappear, and be replaced.
"Without a clear goal, no matter how hard you work, it's difficult to achieve results." We often hear this saying, but we tend to forget it as we go through life. As I realized while talking with Hyunseon today, once you've decided what to do, you can freely change the tools and direction. I hope those reading this, rather than being dazzled by the term "prompt engineering" or the theme of artificial intelligence, take the time to reflect on what you wanted to do and what you intend to do. I forget this myself every time; unless I commit to something, it slips away.
📚 Welcome to Haebom's archives.
---
I post articles related to IT 💻, economy 💰, and humanities 🎭.
If you are curious about my thoughts, perspectives or interests, please subscribe.
haebom@kakao.com