
Daily Arxiv

This page curates AI-related papers published around the world.
All summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

EASTER: Embedding Aggregation-based Heterogeneous Models Training in Vertical Federated Learning

Created by
  • Haebom

Author

Shuo Wang, Keke Gai, Jing Yu, Liehuang Zhu, Kim-Kwang Raymond Choo, Bin Xiao

Outline

In this paper, we propose VFedMH (Vertical Federated learning for training Multiple Heterogeneous models), a novel approach to the optimization-convergence and generalization problems that arise when participants in vertical federated learning (VFL) hold heterogeneous local models. VFedMH aggregates the local embeddings carrying each participant's knowledge during forward propagation. To protect these local embeddings, we propose a lightweight blinding-factor-based embedding protection scheme: each passive party injects a blinding factor into its local embedding and transmits it to the active party, which aggregates the masked embeddings into a global knowledge embedding and returns it to the passive parties. Each passive party then performs forward propagation through its own heterogeneous local network using the global embedding. Because the passive parties do not hold the sample labels, they cannot compute their local model gradients on their own; the active party therefore assists in computing the gradients of the local heterogeneous models. Each participant trains its own local model with these gradients, aiming to minimize the loss of its local heterogeneous model. Extensive experiments demonstrate that VFedMH trains multiple heterogeneous models simultaneously with heterogeneous optimization and outperforms several state-of-the-art methods.
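
The forward-pass protocol described above can be pictured with a short sketch. The following is a minimal illustration, not the authors' implementation: the party names, embedding dimension, and the zero-sum masking used to make the blinding factors cancel in the aggregate are all assumptions for illustration only.

```python
# Minimal sketch of blinding-factor-based embedding aggregation in VFL.
# Assumption: blinding factors are chosen to sum to zero, so individual
# embeddings stay masked while the aggregate is preserved.
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 16
NUM_PASSIVE = 3

# Each passive party produces a local embedding from its own feature
# partition using its own (heterogeneous) local model.
local_embeddings = [rng.normal(size=EMB_DIM) for _ in range(NUM_PASSIVE)]

# Hypothetical blinding scheme: factors that cancel when summed.
blinds = [rng.normal(size=EMB_DIM) for _ in range(NUM_PASSIVE - 1)]
blinds.append(-np.sum(blinds, axis=0))

# Passive parties inject their blinding factors before transmission.
masked = [h + b for h, b in zip(local_embeddings, blinds)]

# The active party aggregates the masked embeddings into a global
# knowledge embedding without seeing any single embedding in the clear.
global_embedding = np.sum(masked, axis=0)

# Sanity check: the aggregate equals the sum of the true local embeddings.
assert np.allclose(global_embedding, np.sum(local_embeddings, axis=0))

# The global embedding is returned to each passive party for forward
# propagation through its heterogeneous local network; the active party,
# which holds the labels, then assists with the gradient computation.
```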

Takeaways, Limitations

Takeaways:
A novel methodology for effectively training heterogeneous local models in vertical federated learning environments.
Privacy protection is strengthened through lightweight blinding-factor-based embedding protection.
Performance is improved by training multiple heterogeneous models simultaneously.
Limitations:
The passive parties depend on the active party to compute their local model gradients.
Further research is needed on the design and application of blinding factors.
Generalization performance needs to be verified across diverse data distributions and model structures.