Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Low-Dimensional Federated Knowledge Graph Embedding via Knowledge Distillation

Created by
  • Haebom

Author

Xiaoxiong Zhang, Zhiwei Zeng, Xin Zhou, Zhiqi Shen

Outline

This paper focuses on Federated Knowledge Graph Embedding (FKGE), which collaboratively learns entity and relation embeddings from the knowledge graphs (KGs) of multiple clients in a distributed environment. High-dimensional embeddings offer better performance but pose challenges for storage and inference speed, and existing embedding compression methods require multiple rounds of model training, which inflates FKGE's communication costs. The paper therefore proposes FedKD, a lightweight component based on knowledge distillation (KD). During client-side local training, FedKD has a low-dimensional student model mimic the triplet score distribution of a high-dimensional teacher model via a KL-divergence loss. Unlike conventional KD, FedKD adaptively learns the temperature for positive triplet scores and scales negative triplet scores with predefined temperatures, mitigating the teacher's overconfidence problem. It also dynamically adjusts the weight of the KD loss to optimize the training process. The effectiveness of FedKD is validated through extensive experiments on three datasets.
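To make the distillation step concrete, below is a minimal PyTorch sketch of what such a loss could look like. This is not the authors' implementation: the class name TripletScoreKD, the log-parameterized learnable positive temperature, the fixed negative temperature t_neg, and the kd_weight schedule are illustrative assumptions.

```python
# Minimal PyTorch sketch of a FedKD-style distillation loss.
# Illustrative only: the names (TripletScoreKD, log_t_pos, t_neg, kd_weight)
# are assumptions, not taken from the paper's code.
import torch
import torch.nn.functional as F


class TripletScoreKD(torch.nn.Module):
    def __init__(self, t_neg: float = 2.0):
        super().__init__()
        # Learnable temperature for the positive triplet score,
        # log-parameterized so the effective temperature stays positive.
        self.log_t_pos = torch.nn.Parameter(torch.zeros(1))
        # Predefined, fixed temperature for negative triplet scores.
        self.t_neg = t_neg

    def forward(self, student_pos, student_neg, teacher_pos, teacher_neg):
        # student_pos / teacher_pos: (batch,)   score of the positive triplet
        # student_neg / teacher_neg: (batch, k) scores of k negative triplets
        t_pos = self.log_t_pos.exp()

        # Soften the positive score with the learned temperature and the
        # negatives with the fixed one, then compare the (1 + k)-way
        # score distributions of teacher and student for each triplet.
        teacher_logits = torch.cat(
            [teacher_pos.unsqueeze(1) / t_pos, teacher_neg / self.t_neg], dim=1
        )
        student_logits = torch.cat(
            [student_pos.unsqueeze(1) / t_pos, student_neg / self.t_neg], dim=1
        )
        teacher_dist = F.softmax(teacher_logits.detach(), dim=1)
        student_log_dist = F.log_softmax(student_logits, dim=1)

        # KL(teacher || student), averaged over the batch.
        return F.kl_div(student_log_dist, teacher_dist, reduction="batchmean")


# Client-side local training would add this term to the usual KGE task loss
# with a weight that is adjusted dynamically, e.g. per epoch (hypothetical):
#   kd_weight = schedule(epoch)
#   loss = task_loss + kd_weight * kd(s_pos, s_neg, t_pos_scores, t_neg_scores)
```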

Takeaways, Limitations

Takeaways:
A lightweight knowledge distillation-based compression method is presented that effectively addresses the storage and inference-cost issues of high-dimensional embeddings in federated learning environments.
A novel adaptive temperature control technique alleviates the teacher-model overconfidence problem that limits conventional knowledge distillation.
The training process is further optimized by dynamically adjusting the weight of the knowledge distillation loss.
The effectiveness of the proposed method is verified through experiments using three datasets.
Limitations:
The effectiveness of the proposed method may be limited to specific datasets.
Generalization across various FKGE models still needs to be verified.
Further comparative analysis with other compression methods is needed.