This paper addresses real-world machine learning applications that must cope with non-stationary data distributions and learn continually over long periods without supervision. In particular, we focus on the catastrophic forgetting (CF) problem that arises in online learning environments. CF is the phenomenon in which a model adapts to recent tasks while its prediction performance on previously learned tasks deteriorates. Existing solutions maintain a fixed-size memory buffer of previous samples and replay them when learning new tasks, but there is little clear guidance on how to exploit prediction uncertainty in memory management or on how the memory should be filled. Building on the intuition that prediction uncertainty reflects where a sample lies in the decision space, this paper provides an in-depth analysis of various uncertainty estimation and memory-filling strategies. We characterize the data points that are most effective in alleviating CF, propose a prediction uncertainty estimation method based on the generalized variance induced by the negative log-likelihood, and experimentally demonstrate that prediction uncertainty measures reduce CF across a variety of settings.
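To make the replay mechanism concrete, the sketch below shows one possible way a fixed-size memory buffer could be filled using per-sample uncertainty scores. It is a minimal illustration, not the paper's method: the predictive-entropy score and the "keep the most uncertain samples" policy are hypothetical stand-ins for the generalized-variance measure and the filling strategies analyzed in the paper.

```python
"""Illustrative sketch: uncertainty-guided filling of a fixed-size rehearsal buffer.

The entropy score and the keep-most-uncertain policy are illustrative stand-ins,
not the generalized-variance measure or the specific strategies studied in the paper.
"""
import numpy as np


def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Entropy of softmax outputs; probs has shape (N, num_classes)."""
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=1)


def fill_memory(buffer, batch_x, batch_y, batch_probs, capacity):
    """Merge an incoming batch into the buffer, keeping the `capacity`
    samples with the highest uncertainty scores.

    buffer : list of (x, y, score) tuples already stored in memory
    """
    scores = predictive_entropy(batch_probs)
    candidates = buffer + [
        (x, y, s) for x, y, s in zip(batch_x, batch_y, scores)
    ]
    # Rank all candidates by uncertainty and retain the top `capacity`.
    candidates.sort(key=lambda item: item[2], reverse=True)
    return candidates[:capacity]


if __name__ == "__main__":
    # Toy usage with random data standing in for a model's softmax outputs.
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(32, 10))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    xs = rng.normal(size=(32, 784))
    ys = rng.integers(0, 10, size=32)
    memory = fill_memory([], xs, ys, probs, capacity=16)
    print(len(memory), "samples kept; highest score =", memory[0][2])
```

The "most uncertain first" policy is only one point in the design space the paper examines: because uncertainty indicates a sample's position relative to the decision boundary, different filling strategies (e.g., favoring low- or mid-uncertainty samples) trade off boundary coverage against representative coverage of each class.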