In this paper, we present Policy Iteration with Turn-over Dropout (PIToD), a novel method for efficiently estimating the influence of individual experiences on the performance of reinforcement learning (RL) agents trained with experience replay. PIToD avoids the prohibitive computational cost of the traditional leave-one-out (LOO) approach, which requires retraining the agent once for each experience whose influence is to be estimated. We evaluate how accurately PIToD estimates the influence of experiences and how much more efficiently it does so than LOO. We also demonstrate that PIToD can improve low-performing RL agents by identifying negatively influential experiences and removing their influence.
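The abstract only names the mechanism, so the following is a minimal illustrative sketch of the turn-over dropout idea: each experience is assigned a fixed binary parameter mask, training on that experience touches only the unmasked parameters, and the flipped mask therefore selects a sub-model that was never influenced by that experience, allowing an influence estimate without retraining. The toy linear model, the mask assignment, the loss, and the sign convention below are assumptions made for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: a linear value model over d features, trained on n experiences.
# (Stand-in for a policy/value network trained by policy iteration.)
d, n = 8, 5
features = rng.normal(size=(n, d))   # one feature vector per experience
targets = rng.normal(size=n)         # regression targets (stand-in for TD targets)

# Turn-over dropout idea (illustrative assumption): assign each experience i a
# fixed binary mask m_i over parameters; when training on experience i, only the
# unmasked parameters are updated, so the flipped sub-model (1 - m_i) is never
# influenced by experience i.
masks = rng.integers(0, 2, size=(n, d)).astype(float)

w = np.zeros(d)
for epoch in range(200):
    for i in range(n):
        pred = (w * masks[i]) @ features[i]             # forward pass with mask m_i
        grad = (pred - targets[i]) * features[i] * masks[i]
        w -= 0.05 * grad                                # only unmasked weights change

def loss(weights, mask):
    """Mean squared error of the masked sub-model over all experiences."""
    preds = (weights * mask) @ features.T
    return float(np.mean((preds - targets) ** 2))

# Influence estimate for experience i: compare the sub-model trained with
# experience i (mask m_i) against the flipped sub-model (1 - m_i), which
# approximates "training without experience i" -- no retraining needed.
# Sign convention here: positive value means the experience was beneficial.
for i in range(n):
    influence = loss(w, 1.0 - masks[i]) - loss(w, masks[i])
    print(f"experience {i}: estimated influence = {influence:+.3f}")
```

In this sketch, experiences with a negative estimate would be candidates for having their influence removed, e.g., by evaluating or continuing training with the corresponding flipped masks; how the paper actually performs this removal is specified in its method section, not here.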