This paper proposes three novel node unlearning methods that efficiently remove sensitive training data from graph neural network (GNN) models and thereby reduce privacy risks. Existing approaches are limited by restrictions on the GNN architectures they support, insufficient use of graph topology, and unfavorable trade-offs between performance and complexity. To address these issues, we introduce Class-based Label Replacement, Topology-guided Neighbor Mean Posterior Probability, and Class-consistent Neighbor Node Filtering; the latter two exploit the topological structure of the graph to perform effective node unlearning. We evaluate all three methods on three benchmark datasets in terms of model utility, unlearning utility, and unlearning efficiency, and show that they outperform existing methods. This work improves the privacy and security of GNN models and offers practical insights for node unlearning research.
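To make the topology-guided idea concrete, the sketch below shows one plausible reading of a neighbor-mean posterior step: an unlearned node's soft target is replaced by the average of its neighbors' predicted class distributions. This is a minimal illustration only; the function name `neighbor_mean_posterior`, the NumPy adjacency-matrix representation, and the exact procedure are assumptions, as the abstract does not specify the method's details.

```python
import numpy as np

def neighbor_mean_posterior(adj, posteriors, node):
    """Average the predicted class distributions of a node's neighbors.

    adj        : (N, N) binary adjacency matrix (assumed undirected, no self-loops)
    posteriors : (N, C) softmax outputs of the trained GNN
    node       : index of the node to be unlearned
    """
    neighbors = np.flatnonzero(adj[node])
    if neighbors.size == 0:
        # Isolated node: fall back to a uniform class distribution.
        return np.full(posteriors.shape[1], 1.0 / posteriors.shape[1])
    return posteriors[neighbors].mean(axis=0)

# Toy example: node 0 is to be unlearned; its soft target becomes the
# mean of its neighbors' (nodes 1 and 2) posteriors before fine-tuning.
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]])
posteriors = np.array([[0.9, 0.1],
                       [0.2, 0.8],
                       [0.4, 0.6]])
print(neighbor_mean_posterior(adj, posteriors, node=0))  # [0.3 0.7]
```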