Daily Arxiv

This page collects papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please cite the source when sharing.

CoreGuard: Safeguarding Foundational Capabilities of LLMs Against Model Stealing in Edge Deployment

Created by
  • Haebom

Author

Qinfeng Li, Tianyue Luo, Xuhong Zhang, Yangfan Xie, Zhiqiang Shen, Lijun Zhang, Yier Jin, Hao Peng, Xinkui Zhao, Xianwei Zhu, Jianwei Yin

Outline

Proprietary LLMs demonstrate strong generalization across a variety of tasks and are increasingly deployed on edge devices for efficiency and privacy. However, if deployed on the edge without proper protection, attackers can extract the model's weights and architecture, enabling unauthorized replication and misuse. CoreGuard addresses this threat: it is a method for safeguarding large language models (LLMs) deployed on edge devices that keeps both computational and communication overhead low through efficient protection and propagation protocols.
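The summary does not spell out how the protection and propagation protocols work. A common pattern in this area, however, is to ship obfuscated weights to the untrusted device while a small secret transform stays inside a protected environment, so that the extracted weights alone are useless. The sketch below illustrates only that general pattern, not CoreGuard's actual design; the dimensions, the transform `T`, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for a single protected projection layer.
D_IN, D_OUT = 64, 64

# Original proprietary weight matrix (never shipped in the clear).
W = rng.standard_normal((D_IN, D_OUT))

# Obfuscate W with a secret transform T (a random square matrix is
# invertible almost surely). Only W_pub leaves the vendor.
T = rng.standard_normal((D_OUT, D_OUT))
W_pub = W @ T                 # deployed on the untrusted edge device
T_inv = np.linalg.inv(T)      # kept inside the protected environment


def edge_forward(x: np.ndarray) -> np.ndarray:
    """Runs on the untrusted edge device: sees only obfuscated weights."""
    return x @ W_pub


def protected_correction(h: np.ndarray) -> np.ndarray:
    """Runs inside the protected environment: applies the secret inverse."""
    return h @ T_inv


x = rng.standard_normal((1, D_IN))
authorized = protected_correction(edge_forward(x))  # W_pub @ T_inv == W
stolen = edge_forward(x)                             # attacker's view

print("authorized output matches original:", np.allclose(authorized, x @ W))
print("stolen output matches original:    ", np.allclose(stolen, x @ W))
```

In this toy setup, an attacker who dumps `W_pub` from the device still cannot reproduce the original projection, because the correction by `T_inv` happens only inside the protected environment; efficient protocols of this kind aim to keep that protected step cheap relative to the full forward pass.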

Takeaways, Limitations

Takeaways:
  • Provides strong protection for LLMs deployed on edge devices.
  • Very low computational and communication overhead makes it well suited to edge environments.
Limitations:
  • The paper summary alone does not reveal the specific implementation details or performance figures.
  • A quantitative assessment of the achieved security level may not be provided.
  • Comparative analysis against other protection methods may be insufficient.