Personally, I think this principle is exactly why things like ChatGPT and LLMs have become so popular. Older systems like AlphaGo or classic deep-learning models came across to users as 'cheat codes' or guaranteed wins: the same input reliably produced the same result. With newer generative models, you can never quite predict the output. That unpredictability acts as an unexpected reward, and because a little user effort (prompting) produces clearly different results, the perceived usefulness and satisfaction feel far higher than with other technology. And since every result is different, from the user's perspective the learning can go on forever, never fully finished. (Doing deep learning yourself, by contrast, has a high barrier to entry and unpredictable results of its own.)