This paper focuses on Federated Knowledge Graph Embedding (FKGE), which collaboratively learns entity and relation embeddings from the knowledge graphs (KGs) of multiple clients in a distributed environment. High-dimensional embeddings offer performance advantages but pose challenges in storage space and inference speed. Existing embedding compression methods require multiple model training runs, which increases the communication cost of FKGE. Therefore, this paper proposes FedKD, a lightweight component based on knowledge distillation (KD). During client-side local training, FedKD lets a low-dimensional student model mimic the triplet score distribution of a high-dimensional teacher model via a KL-divergence loss. Unlike conventional KD, FedKD adaptively learns temperatures for positive triplet scores and adjusts negative triplet scores using predefined temperatures, thereby mitigating the teacher's overconfidence problem. Furthermore, it dynamically adjusts the weight of the KD loss to optimize the training process. We validate the effectiveness of FedKD through extensive experiments on three datasets.
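
The abstract does not give implementation details, so the following PyTorch sketch is only a rough illustration of the kind of distillation objective described: the student mimics the teacher's triplet score distribution via KL divergence, with a learnable temperature applied to the positive triplet score, a predefined temperature for negative scores, and a tunable weight on the KD term. All names (e.g., `TripletKDLoss`, `tau_neg`, `kd_weight`) are hypothetical and not taken from the paper; the actual temperature-learning and weight-scheduling rules of FedKD are not reproduced here.

```python
# Hypothetical sketch, not the authors' implementation.
# Assumes higher scores mean more plausible triplets.
# teacher_scores / student_scores: (B, 1 + K), column 0 is the positive triplet,
# the remaining K columns are negative triplets.
import torch
import torch.nn.functional as F


class TripletKDLoss(torch.nn.Module):
    def __init__(self, tau_neg: float = 2.0):
        super().__init__()
        # Learnable temperature for the positive triplet score (assumption).
        self.log_tau_pos = torch.nn.Parameter(torch.zeros(1))
        # Predefined, fixed temperature for negative triplet scores (assumption).
        self.tau_neg = tau_neg

    def forward(self, teacher_scores: torch.Tensor, student_scores: torch.Tensor) -> torch.Tensor:
        # Keep the learned temperature strictly positive.
        tau_pos = F.softplus(self.log_tau_pos) + 1e-6
        num_neg = teacher_scores.size(1) - 1
        # Per-column temperatures: learned for the positive, predefined for negatives.
        temps = torch.cat(
            [tau_pos, teacher_scores.new_full((num_neg,), self.tau_neg)]
        )
        teacher_soft = F.softmax(teacher_scores / temps, dim=-1)        # softened teacher distribution
        student_log_soft = F.log_softmax(student_scores / temps, dim=-1)  # student log-distribution
        # KL(teacher || student) over the triplet score distribution.
        return F.kl_div(student_log_soft, teacher_soft, reduction="batchmean")


def total_loss(task_loss: torch.Tensor, kd_loss: torch.Tensor, kd_weight: float) -> torch.Tensor:
    # kd_weight stands in for the dynamically adjusted KD-loss weight mentioned
    # in the abstract; the actual adjustment schedule is not specified here.
    return task_loss + kd_weight * kd_loss
```

In a federated setting, a loss of this form would be added to each client's local objective so that the compact student embeddings are distilled from the high-dimensional teacher without extra communication rounds; this framing follows the abstract, not a released codebase.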