Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

FIT-Print: Towards False-claim-resistant Model Ownership Verification via Targeted Fingerprint

Created by
  • Haebom

Authors

Shuo Shao, Haozhe Zhu, Yiming Li, Hongwei Yao, Tianwei Zhang, Zhan Qin

Outline

This paper identifies a vulnerability in model fingerprinting techniques used to protect the intellectual property of open-source models and proposes a new approach to address it. The authors show that existing fingerprinting techniques, because they rely on untargeted comparisons, are vulnerable to false claim attacks, in which an adversary falsely claims ownership of a model that is not theirs. To counter this, they propose FIT-Print, a targeted fingerprinting paradigm, and develop two black-box model fingerprinting techniques, FIT-ModelDiff and FIT-LIME, which build the targeted fingerprint from the distance between model outputs and from the feature importance of specific samples, respectively. Experimental results show that the proposed methods are more robust against false claim attacks, and remain effective, compared with existing techniques.
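As a rough illustration of the targeted idea, a verifier could commit in advance to a target bit-string and accept an ownership claim only if the suspect model reproduces it on verifier-chosen probes. The sketch below is an assumption-laden simplification, not the paper's actual implementation: the probe construction, the distance threshold, and the match threshold `tau` are all illustrative choices, and `model` is assumed to be a callable returning NumPy output vectors.

```python
import numpy as np

def output_distance_fingerprint(model, probe_pairs, dist_threshold=0.5):
    """Extract one fingerprint bit per probe pair (x, x') by measuring the
    distance between the model's outputs and binarizing it. This loosely
    mirrors an output-distance-based fingerprint in the spirit of
    FIT-ModelDiff; the threshold here is purely illustrative."""
    bits = []
    for x, x_perturbed in probe_pairs:
        d = np.linalg.norm(model(x) - model(x_perturbed))
        bits.append(1 if d > dist_threshold else 0)
    return np.array(bits)

def verify_ownership(suspect_model, probe_pairs, target_fingerprint, tau=0.9):
    """Targeted verification: the claim is accepted only if the suspect
    model's extracted fingerprint matches the pre-committed target
    signature, rather than merely being 'similar' to the claimant's model
    as in untargeted comparison."""
    fp = output_distance_fingerprint(suspect_model, probe_pairs)
    match_rate = np.mean(fp == target_fingerprint)
    return match_rate >= tau
```

The point of the targeted comparison is that a false claimant would have to make an independent model hit a specific, verifier-committed signature, which is far harder than exhibiting generic behavioral similarity.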

Takeaways, Limitations

Takeaways:
  • Reveals the vulnerability of existing model fingerprinting techniques to false claim attacks.
  • Proposes a targeted fingerprinting paradigm (FIT-Print) and new black-box model fingerprinting techniques (FIT-ModelDiff, FIT-LIME).
  • Develops a model fingerprinting technique that is robust and effective against false claim attacks.
  • Presents a new approach to intellectual property protection for open-source models.
Limitations:
  • The performance of the proposed method may vary with the model and dataset used.
  • Further research is needed on applicability and scalability in real-world environments.
  • Further evaluation of resistance to other types of attacks is needed.