Data sanitization, i.e., removing the influence of specific training data, is increasingly required of deep learning models to comply with privacy regulations. This is particularly challenging in federated learning (FL), where data is distributed across clients, making full retraining or coordinated updates difficult. In this study, we present an efficient federated unlearning framework grounded in information theory. The framework models information leakage as a parameter estimation problem and uses second-order (Hessian) information to selectively reset the parameters most sensitive to the data to be sanitized, followed by minimal federated retraining. It supports both class-level (categorical) and client-level unlearning, and the server requires no access to clients' raw data after the initial information aggregation. Evaluation on benchmark datasets demonstrates robust privacy protection (membership inference attack (MIA) success rates close to random guessing, removal of categorical knowledge) and high utility (normalized accuracy ≈ 0.9 relative to the retraining baseline). Furthermore, the framework effectively neutralizes malicious triggers in targeted backdoor attack scenarios, restoring model integrity.
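As a rough illustration of the selective-reset idea summarized above, the sketch below scores parameters with a diagonal empirical Fisher estimate (a common stand-in for second-order Hessian sensitivity) on the data to be forgotten, then reinitializes the most sensitive weights before a short round of retraining. The PyTorch API, the `reset_fraction` hyperparameter, and the reinitialization scheme are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F


def fisher_sensitivity(model, forget_loader, device="cpu"):
    """Accumulate a per-parameter diagonal Fisher estimate on the forget set."""
    model.to(device).eval()
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in forget_loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                # Squared gradients approximate the diagonal of the Fisher/Hessian.
                fisher[n] += p.grad.detach() ** 2
    return fisher


def selective_reset(model, fisher, reset_fraction=0.05):
    """Reinitialize the fraction of parameters most sensitive to the forget data."""
    scores = torch.cat([f.flatten() for f in fisher.values()])
    k = max(1, int(reset_fraction * scores.numel()))
    threshold = torch.topk(scores, k).values.min()  # sensitivity cut-off
    with torch.no_grad():
        for n, p in model.named_parameters():
            mask = fisher[n] >= threshold
            # Reset the most sensitive weights to small random values;
            # a brief federated fine-tuning phase would then restore utility.
            p[mask] = torch.randn_like(p)[mask] * 0.01
    return model
```

In an FL setting, the Fisher accumulation would be computed locally by each client on its own forget data and only the aggregated sensitivity scores sent to the server, which is consistent with the claim that raw data never leaves the clients after the initial aggregation.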