In this paper, we present a method for automatically synthesizing dense rewards from natural language explanations in reinforcement learning. To address the limitations of prior work, namely the poor scalability of LLM annotation and the reliance on massive offline datasets, we propose ONI, a distributed architecture that annotates the agent's experience through an asynchronous LLM server and distills the annotations into an intrinsic reward model. We explore a range of algorithmic choices for reward modeling, including hashing, classification, and ranking models, and achieve state-of-the-art performance on a variety of tasks in the NetHack Learning Environment without requiring the large offline datasets of previous studies. The code is available on GitHub.
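To make the asynchronous annotation-and-distillation loop concrete, the sketch below is a minimal, hypothetical Python illustration (not the authors' implementation): an agent thread sends observation captions to a stand-in "LLM server" running in the background, and a simple hashing-style reward model is updated from the returned labels so that dense intrinsic rewards remain available without blocking environment interaction. All class and function names here are assumptions for illustration only.

```python
# Illustrative sketch only: asynchronous LLM annotation distilled into an
# intrinsic reward model. The "LLM server" is simulated with random labels.
import queue
import random
import threading
import time
from collections import defaultdict


class HashingRewardModel:
    """Toy hashing-style variant: reward captions labeled interesting, with count-based decay."""

    def __init__(self):
        self.positive = set()
        self.counts = defaultdict(int)

    def update(self, caption, label):
        if label == 1:
            self.positive.add(caption)

    def intrinsic_reward(self, caption):
        if caption not in self.positive:
            return 0.0
        self.counts[caption] += 1
        return 1.0 / self.counts[caption] ** 0.5  # decay on repeated visits


def llm_server(requests, responses, stop):
    """Stand-in for the asynchronous LLM annotation server (random labels here)."""
    while not stop.is_set():
        try:
            caption = requests.get(timeout=0.1)
        except queue.Empty:
            continue
        time.sleep(0.05)  # simulated annotation latency
        responses.put((caption, random.randint(0, 1)))


def agent_loop(steps=200):
    requests, responses = queue.Queue(), queue.Queue()
    stop = threading.Event()
    threading.Thread(target=llm_server, args=(requests, responses, stop), daemon=True).start()

    reward_model = HashingRewardModel()
    for _ in range(steps):
        caption = f"the agent sees event {random.randint(0, 20)}"  # placeholder env message
        requests.put(caption)                            # non-blocking annotation request
        r_int = reward_model.intrinsic_reward(caption)   # dense reward available immediately
        # ... policy update with extrinsic + intrinsic reward would go here ...
        while True:                                      # drain any finished annotations
            try:
                labeled_caption, label = responses.get_nowait()
            except queue.Empty:
                break
            reward_model.update(labeled_caption, label)
    stop.set()


if __name__ == "__main__":
    agent_loop()
```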