This paper proposes SEDEG, a novel encoder-decoder based method for enhancing knowledge generalization in incremental learning. SEDEG is trained in two stages. In the first stage, an ensemble encoder is trained via feature boosting to learn generalized representations, which improves the decoder's generalization ability and better balances the classifier and the decoder. In the second stage, a knowledge distillation (KD) strategy compresses the ensemble encoder into a new, more generalized encoder; effective knowledge transfer is achieved through balanced KD and feature KD. Extensive experiments on three benchmark datasets demonstrate the superior performance of SEDEG, and further experiments confirm the effectiveness of each component.
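
To make the two-stage procedure concrete, the sketch below gives one plausible PyTorch instantiation: stage one fuses a frozen old encoder with a new branch (feature boosting) and trains the decoder and classifier on the fused features, while stage two distills the ensemble into a single student encoder with a feature-level term and a temperature-scaled logit term. All module definitions, the additive fusion, the loss weights (`t`, `alpha`), and the use of plain weighted KL in place of the paper's balanced KD are illustrative assumptions, not the exact formulation of SEDEG.

```python
# Minimal sketch of a two-stage ensemble-then-distill loop; names and losses
# are assumptions for illustration, not SEDEG's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_dim=32, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, feat_dim))
    def forward(self, x):
        return self.net(x)

def stage1_feature_boosting(old_enc, new_enc, decoder, classifier, x, y):
    """Stage 1: boost frozen old-task features with a new encoder branch and
    train the decoder and classifier on the fused (ensemble) features."""
    with torch.no_grad():
        f_old = old_enc(x)            # frozen features from the previous task
    f_boost = f_old + new_enc(x)      # assumed additive feature boosting
    logits = classifier(f_boost)
    recon = decoder(f_boost)
    return F.cross_entropy(logits, y) + F.mse_loss(recon, x)

def stage2_distill(ensemble_feats, ensemble_logits, student_enc, classifier, x,
                   t=2.0, alpha=0.5):
    """Stage 2: compress the ensemble encoder into a single student encoder
    via feature KD (MSE on features) plus a temperature-scaled logit KD term
    (a stand-in for balanced KD; the balancing scheme is an assumption)."""
    f_s = student_enc(x)
    logits_s = classifier(f_s)
    feat_kd = F.mse_loss(f_s, ensemble_feats)
    logit_kd = F.kl_div(F.log_softmax(logits_s / t, dim=1),
                        F.softmax(ensemble_logits / t, dim=1),
                        reduction="batchmean") * t * t
    return alpha * feat_kd + (1 - alpha) * logit_kd

if __name__ == "__main__":
    x = torch.randn(8, 32)
    y = torch.randint(0, 10, (8,))
    old_enc, new_enc, student = Encoder(), Encoder(), Encoder()
    decoder = nn.Linear(64, 32)
    classifier = nn.Linear(64, 10)
    loss1 = stage1_feature_boosting(old_enc, new_enc, decoder, classifier, x, y)
    with torch.no_grad():
        f_ens = old_enc(x) + new_enc(x)       # ensemble features as KD targets
        logits_ens = classifier(f_ens)
    loss2 = stage2_distill(f_ens, logits_ens, student, classifier, x)
    print(loss1.item(), loss2.item())
```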