This paper addresses loss of plasticity, the gradual decline in a neural network's ability to learn when it is trained on non-stationary data over long periods, which is a critical issue in the design of continual learning systems. We propose reinitializing a portion of the network as an effective technique for preventing plasticity loss, and we compare and analyze two reinitialization granularities: unit reinitialization and weight reinitialization. Specifically, we propose a novel algorithm, "selective weight reinitialization," and compare it with existing unit reinitialization algorithms, continual backpropagation and ReDo. Our experimental results show that weight reinitialization maintains plasticity more effectively than unit reinitialization when the network is small or includes layer normalization, whereas the two approaches are equally effective when the network is sufficiently large and layer normalization is absent. We therefore conclude that weight reinitialization maintains plasticity across a wider range of environments than unit reinitialization.
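
The difference between the two reinitialization granularities can be illustrated with a minimal sketch on a single linear layer. The selection criteria used here (smallest-magnitude weights for weight-level resets, smallest incoming-weight norm for unit-level resets), the reset fraction, and the initialization ranges are illustrative assumptions only; the paper's selective weight reinitialization and the unit-level methods (continual backpropagation, ReDo) define their own utility measures.

```python
# Sketch: weight-level vs. unit-level reinitialization (illustrative criteria, not the paper's).
import torch
import torch.nn as nn


def reinit_weights_selectively(layer: nn.Linear, fraction: float = 0.01) -> None:
    """Reinitialize a fraction of individual weights (here: smallest magnitude)."""
    with torch.no_grad():
        w = layer.weight
        k = max(1, int(fraction * w.numel()))
        # Indices of the k weights with the smallest absolute value (assumed criterion).
        idx = torch.topk(w.abs().view(-1), k, largest=False).indices
        fresh = torch.empty(k, device=w.device, dtype=w.dtype)
        nn.init.uniform_(fresh, -0.1, 0.1)  # illustrative init range
        w.view(-1)[idx] = fresh


def reinit_units(layer: nn.Linear, next_layer: nn.Linear, fraction: float = 0.01) -> None:
    """Reinitialize whole units: fresh incoming weights, zeroed outgoing weights."""
    with torch.no_grad():
        k = max(1, int(fraction * layer.out_features))
        # Units with the smallest incoming-weight norm (assumed utility proxy).
        unit_ids = torch.topk(layer.weight.norm(dim=1), k, largest=False).indices
        fresh = torch.empty(k, layer.in_features, dtype=layer.weight.dtype)
        nn.init.uniform_(fresh, -0.1, 0.1)  # illustrative init range
        layer.weight[unit_ids] = fresh
        if layer.bias is not None:
            layer.bias[unit_ids] = 0.0
        # Zero outgoing weights so reset units do not perturb the next layer's output.
        next_layer.weight[:, unit_ids] = 0.0


# Usage: weight-level reset touches scattered individual connections,
# unit-level reset replaces entire hidden units at once.
layer1, layer2 = nn.Linear(64, 128), nn.Linear(128, 10)
reinit_weights_selectively(layer1, fraction=0.01)
reinit_units(layer1, layer2, fraction=0.01)
```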