Daily Arxiv

This page organizes papers related to artificial intelligence published around the world.
This page is summarized using Google Gemini and is operated on a non-profit basis.
The copyright of each paper belongs to its authors and their institutions; when sharing, please cite the source.

Defending against Stegomalware in Deep Neural Networks with Permutation Symmetry

Created by
  • Haebom

Author

Birk Torpmann-Hagen, Michael A. Riegler, Pål Halvorsen, Dag Johansen

Outline

As deep neural networks are deployed in ever more applications, network checkpoints are widely shared and distributed to facilitate development. This paper addresses the threat of stegomalware, which hides malware in deep neural network checkpoints with minimal impact on network accuracy. Despite being a critical security issue, it has received little attention from either deep learning researchers or security professionals. The paper proposes the first effective countermeasure against this attack: it demonstrates that state-of-the-art stegomalware can be neutralized by shuffling the column order of the weight matrices and bias vectors, or the channel order of convolutional layers. This corrupts payloads embedded with state-of-the-art steganographic methods without degrading network accuracy, significantly outperforming competing defenses. The authors also call for ongoing research into ways to circumvent this defense, into additional defenses, and into the security of machine learning systems more broadly.
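The defense rests on permutation symmetry: reordering the hidden units of a layer (and applying the matching reorder to the next layer's inputs) leaves the network's function unchanged, while scrambling the byte order of the stored weights that a steganographic payload depends on. A minimal NumPy sketch of this idea, using a toy two-layer MLP (not the paper's actual code), might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP: y = relu(x @ W1 + b1) @ W2 + b2
W1 = rng.normal(size=(8, 16))
b1 = rng.normal(size=16)
W2 = rng.normal(size=(16, 4))
b2 = rng.normal(size=4)

def forward(x, W1, b1, W2, b2):
    h = np.maximum(x @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2

# Shuffle the hidden units: permute the columns of W1 and the
# entries of b1, and apply the SAME permutation to the rows of W2
# so the network computes the identical function.
perm = rng.permutation(16)
W1p, b1p, W2p = W1[:, perm], b1[perm], W2[perm, :]

x = rng.normal(size=(5, 8))
# Outputs are unchanged, even though the serialized weight bytes
# (and hence any payload hidden in their ordering) are scrambled.
assert np.allclose(forward(x, W1, b1, W2, b2),
                   forward(x, W1p, b1p, W2p, b2))
```

For convolutional layers the analogous move is to permute output channels of one layer and the corresponding input channels of the next; a payload read off the flattened parameter stream no longer decodes.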

Takeaways, Limitations

  • The first effective defense against stegomalware attacks: shuffling the columns of weight matrices and bias vectors (or the channels of convolutional layers).
  • Corrupts steganographically embedded payloads without compromising network accuracy.
  • Outperforms competing defense methods.
  • Discusses the potential for bypassing this defense, the need for additional defenses, and further research on the security of machine learning systems.