
AI UX Study Group Sharing Session 3/3

Haebom
This ended up turning into a three-part series because I failed to control the length—I apologize for that. The final topic is step 5, user retention and preventing churn. Let's get started.

Step 5: Retention (User retention and preventing churn)

Personally, I find that steps 1 through 4 are all starting to look the same. It's not really a creativity issue, just that so many products and services are flooding the market. Figuring out how to present things through design and visual direction has become the key challenge.
On the other hand, as more services offer similar features, it becomes much easier for users to leave. One thing I’ve noticed while researching is that users don’t feel any attachment to the ‘generated data.’ They don’t see it as their own valuable content. It’s really a case of easy come, easy go. Because they got big results with minimal effort, they don’t view this data as precious or worth hanging on to. Long story short, when something fancier or cheaper comes along, users just move on. So how do AI services address this? Let’s take a look, one by one.

Specialization

This is a widely used strategy. Regardless of the model underneath, these services lead with what they excel at. For example, character.ai focuses on creating and chatting with characters as personas, while Tensor.art markets itself as a more specialized image generation service. Typeset.io, targeting researchers, is another good example.
The pitch is: when writing papers, when reviewing papers, "I can't live without this anymore."
One of the benefits of specialization is that it allows for cost justification. As I've mentioned in other blog posts, considering that services like OpenAI and Google typically charge $20 per month, specialized services can charge more and still lock users in, as long as they become irreplaceable or deeply embedded in users' work. A prime example, personally, is LBOX, a legal AI service in South Korea that provides court judgments and litigation information to lawyers, judicial officers, and law students. It charges over $55 per month, yet it hasn't experienced meaningful user churn.

Personalization

Personalization is a topic that comes up constantly in AI. Especially in services that already hold user data, or in the education field, there's a focus on delivering custom curricula and the like. Claims like "tailored for you!" are everywhere, but nobody has fully realized this yet. The reason is simple: it's hard to get permission to use user data, and fine-tuning for each individual is far too costly.
When Notion first introduced AI, it cleverly secured Workspace permissions, and lately, platforms like Salesforce or Google Workspace have started doing similar things. At the end of the day, data ownership is about controlling how an AI model remembers and uses your data—an essential feature for balancing privacy and AI model improvement.
Usually, you’ll find this setting under user or company preferences, presented as a basic on/off toggle with a short explanation on how it helps model improvement. Approaches differ: most services default to 'on,' but user-centric companies like Figma set it to 'off' by default.
There's also a difference between free and paid users. In many cases, data sharing opt-out is only available on premium plans. For enterprise accounts, this type of setting is typically managed by an admin, not individuals. In simple terms, if you provide your data, you get something personalized for you, but if you don’t agree, things might be a bit less smooth.
We’ve actually seen this before with iOS—things like app tracking, collecting user info and behavioral data for marketing or ad exposure. If you think about personalized ads as a sort of personal AI, it’s easy to make the connection.

Token Optimization

Token layering is a technique where users deliberately combine tokens as they craft prompts, in order to more precisely guide how the AI understands and responds. Simply put, instead of instantly grasping the whole sentence like a human would, AI breaks things down into smaller chunks and refines its analysis step by step. That’s what we call token layering.
It’s a bit like stacking Lego blocks to build a prompt, letting you express your intent more precisely. Token transparency means the AI reveals which tokens it used to generate its response. By seeing this, users can understand the AI’s ‘thought process’ and craft even better prompts.
This may look easy, but it's hard. Even Google struggles to fully control token layering.
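The Lego-block analogy can be sketched as composing a prompt from labeled layers that can be swapped or refined independently (the layer names here are my own, not any product's API):

```python
# Toy sketch of token layering: build a prompt by stacking labeled
# blocks (subject, style, structure, reference), so each layer can be
# adjusted on its own without rewriting the whole prompt.

def layer_prompt(subject: str, **layers: str) -> str:
    """Join the subject with 'name: value' blocks, in the order given."""
    parts = [subject]
    for name, value in layers.items():
        parts.append(f"{name}: {value}")
    return " | ".join(parts)

prompt = layer_prompt(
    "a lighthouse at dawn",
    style="watercolor",
    structure="rule of thirds",
    reference="Hokusai",
)
print(prompt)
# a lighthouse at dawn | style: watercolor | structure: rule of thirds | reference: Hokusai
```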
There are lots of ways to use these techniques to improve user experience. Take web-based tools like Adobe Firefly: they provide a palette so you can freely write your prompts and easily add tokens for style, structure, references, etc.—an intuitive example of token layering in action. Or services like Google AI Overview and Perplexity, which collect extra tokens by automatically generating follow-up questions after your initial prompt so they can better pinpoint your intent.
For token transparency, a great example is Midjourney’s /describe command—this shows which tokens were used to make an image, letting users understand how AI interpreted their request and edit as needed. Audio generators like Udio include relevant tokens in the metadata of generated files, making it easy for users to search and create similar tracks.
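The metadata idea can be sketched as tagging each generated file with the tokens that produced it, then finding similar items by token overlap (the field names and library layout below are assumptions for illustration, not Udio's actual format):

```python
# Toy sketch of token transparency via metadata: store the generation
# tokens alongside each output, then surface similar items by counting
# shared tokens.

library = [
    {"file": "track_01.wav", "tokens": {"lofi", "piano", "rainy"}},
    {"file": "track_02.wav", "tokens": {"synthwave", "retro", "piano"}},
    {"file": "track_03.wav", "tokens": {"lofi", "guitar"}},
]

def similar(tokens: set[str], min_overlap: int = 1) -> list[str]:
    """Return files sharing at least `min_overlap` tokens, best match first."""
    scored = [(len(item["tokens"] & tokens), item["file"]) for item in library]
    return [f for score, f in sorted(scored, reverse=True) if score >= min_overlap]

print(similar({"lofi", "piano"}))  # track_01 ranks first: it shares both tokens
```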
These kinds of features help users understand how the AI works and get better results. They encourage creativity too, and make using AI more efficient. Ultimately, this can raise satisfaction and encourage users to stick around. Of course, when adding these features, it’s important to keep the user's learning curve in mind and roll out more complex options step by step.

Solution-type Offering

This approach is most common on the B2B side, since the services are sold to enterprises. Usually this means the client's data is processed in a physical data center, or an open-source model is tuned to fit the client. These days, a lot of SI companies use models like LLaMA 3 or Qwen to deliver on-premise deployments. As always, you need to decide up front: do you fully outsource the work, or do you handle building and operations yourself and just get some help?
If you go fully outsourced, it might be faster at first, but maintenance and feature updates get tougher down the line. Building or tuning in-house is slower to start, but long-term you get more scalability and ownership. Lately there’ve been a lot of MLOps-type solutions rolling out too.

Referral System + In-house Credits

While traditional referral systems offered physical goods, this newer approach rewards referrals with in-house credits. In fact, it's a continuation of a tactic long used in SaaS marketing. For example, Gamma.app, a slide-creation service, offers 200 credits per invite. That's a significant reward, considering that around 40 credits are used to create a slide deck.
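A quick back-of-the-envelope on those numbers, using only the figures quoted above:

```python
# Back-of-the-envelope on Gamma.app-style referral credits, using the
# figures quoted above: 200 credits per invite, ~40 credits per deck.

CREDITS_PER_INVITE = 200
CREDITS_PER_DECK = 40

def decks_from_invites(invites: int) -> int:
    """How many decks the credits from N successful invites can cover."""
    return (invites * CREDITS_PER_INVITE) // CREDITS_PER_DECK

print(decks_from_invites(1))  # 5 decks per successful invite
```

Five free decks per invite is generous enough to drive sharing, which is exactly why the "drain" problem below matters.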
However, one drawback of this credit system is that credits can pile up, so you need some sort of "drain" strategy to spend them down. I remember Notion tried exactly this in the past but eventually switched to an affiliate model. Still, as an early growth tactic, it works, and it's a good way to boost user retention.
Here's a rough summary. Since this was organized from a planning and UX perspective, I think it would be even more effective with perspectives from marketing and developers mixed in. Feel free to share, and if you're interested in participating, please email haebom@kakao.com.