Daily Arxiv

This page organizes papers related to artificial intelligence published around the world.
The summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright of each paper belongs to its authors and their institutions; when sharing, simply cite the source.

Boundary on the Table: Efficient Black-Box Decision-Based Attacks for Structured Data

Created by
  • Haebom

Author

Roie Kazoom, Yuval Ratzabi, Etamar Rothstein, Ofer Hadar

Outline

This paper addresses adversarial robustness for structured data, a setting that has been underexplored compared to the vision and language domains. To this end, the authors propose a novel black-box, decision-based adversarial attack for tabular data. The attack efficiently explores both discrete and continuous feature spaces with minimal oracle access by combining gradient-free direction estimation with an iterative boundary search. Extensive experiments demonstrate that the proposed method successfully compromises nearly the entire test set across a variety of models, from classical machine learning classifiers to large language model (LLM)-based pipelines. Notably, the attack consistently achieves success rates exceeding 90% with only a small number of queries per instance. These results highlight the severe vulnerability of tabular models to adversarial perturbations and underscore the urgent need for more robust defenses in real-world decision-making systems.
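The two ingredients named above — gradient-free direction estimation and iterative boundary search under label-only oracle access — can be illustrated with a minimal sketch. Note this is a hedged reconstruction of the general decision-based attack pattern, not the authors' exact algorithm; the function names, step sizes, and sampling scheme here are illustrative assumptions.

```python
import numpy as np

def decision_based_attack(predict, x, y_true, x_adv_init,
                          n_steps=50, n_dirs=10, rng=None):
    """Illustrative sketch of a label-only (decision-based) attack.

    predict     -- oracle returning only a class label (no scores/gradients)
    x, y_true   -- original instance and its true label
    x_adv_init  -- any starting point already misclassified by the model
    """
    rng = rng or np.random.default_rng(0)
    x_adv = x_adv_init.astype(float).copy()
    for _ in range(n_steps):
        # Iterative boundary search: binary-search along the segment from
        # x_adv toward x, keeping the point as close to x as possible
        # while it remains misclassified.
        lo, hi = 0.0, 1.0
        for _ in range(10):
            mid = (lo + hi) / 2
            if predict(x_adv + mid * (x - x_adv)) != y_true:
                lo = mid  # still adversarial: move closer to x
            else:
                hi = mid
        x_adv = x_adv + lo * (x - x_adv)
        # Gradient-free direction estimation: probe a few random unit
        # directions and keep any adversarial candidate nearer to x.
        step = 0.1 * np.linalg.norm(x - x_adv)
        best = x_adv
        for _ in range(n_dirs):
            d = rng.normal(size=x.shape)
            d /= np.linalg.norm(d) + 1e-12
            cand = x_adv + step * d
            if (predict(cand) != y_true and
                    np.linalg.norm(cand - x) < np.linalg.norm(best - x)):
                best = cand
        x_adv = best
    return x_adv
```

Each outer iteration costs only a handful of oracle queries, which matches the paper's emphasis on query efficiency; extending this pattern to discrete features (e.g., rounding or flipping categorical values before each oracle call) is where the tabular-specific machinery would come in.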

Takeaways, Limitations

Takeaways:
Tabular models are demonstrably vulnerable to adversarial attacks.
The proposed black-box attack achieves high success rates across a wide range of models.
The results highlight the need for stronger defenses to ensure the robustness of real-world decision-making systems.
Limitations:
The paper does not explicitly discuss the limitations of the proposed attack.
No discussion of defensive methods is included.