In this paper, we propose a reinforcement learning from implicit human feedback (RLIHF) framework based on electroencephalography (EEG) to overcome a key limitation of conventional reinforcement learning (RL): its difficulty in learning effective policies in sparse-reward environments. Our framework leverages error-related potentials (ErrPs) to provide continuous implicit feedback without explicit user intervention; raw EEG signals are transformed into probabilistic reward components by a pre-trained decoder, enabling effective policy learning even when external rewards are sparse. We evaluate the proposed method on obstacle avoidance and object manipulation tasks with a Kinova Gen2 robotic arm in a simulation environment based on the MuJoCo physics engine. Agents trained with decoded EEG feedback achieve performance comparable to agents trained with manually designed dense rewards. These results demonstrate the potential of implicit neural feedback for scalable, human-centric reinforcement learning in interactive robotics.
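
The abstract describes converting decoded ErrP probabilities into a reward component that supplements sparse external rewards. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the decoder interface (`ErrPDecoder`), the shaping coefficient `lambda_errp`, and the subtractive shaping rule are all assumptions introduced here for illustration.

```python
# Illustrative sketch only: how a pre-trained ErrP decoder's output
# probability might be folded into the RL reward. The decoder interface,
# the coefficient lambda_errp, and the shaping rule are assumptions,
# not details taken from the paper.

import numpy as np


class ErrPDecoder:
    """Stand-in for a pre-trained EEG decoder.

    Assumed interface: given a window of EEG samples (channels x time),
    return the probability that the user perceived the robot's last
    action as erroneous (i.e., that an ErrP occurred).
    """

    def predict_error_probability(self, eeg_window: np.ndarray) -> float:
        # A real decoder would run a trained classifier here; a dummy
        # value keeps the sketch self-contained and runnable.
        return float(np.clip(np.random.beta(2.0, 5.0), 0.0, 1.0))


def shaped_reward(sparse_reward: float,
                  eeg_window: np.ndarray,
                  decoder: ErrPDecoder,
                  lambda_errp: float = 0.5) -> float:
    """Combine the sparse task reward with an EEG-derived penalty.

    A higher decoded error probability means the human likely judged
    the action as wrong, so it contributes a negative shaping term.
    """
    p_error = decoder.predict_error_probability(eeg_window)
    return sparse_reward - lambda_errp * p_error


if __name__ == "__main__":
    decoder = ErrPDecoder()
    fake_eeg = np.random.randn(32, 250)  # e.g., 32 channels, 250 samples
    print(shaped_reward(sparse_reward=0.0, eeg_window=fake_eeg, decoder=decoder))
```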