Daily Arxiv

This page curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Foe for Fraud: Transferable Adversarial Attacks in Credit Card Fraud Detection

Created by
  • Haebom

Author

Jan Lum Fok, Qingwen Zeng, Shiping Chen, Oscar Fawkes, Huaming Chen

Outline

This paper examines the robustness of machine learning (ML) models for credit card fraud detection (CCFD). Specifically, we investigate the impact of adversarial attacks on tabular credit card transaction data. While previous research has explored adversarial attacks on image data, research on tabular data in CCFD has been limited. In this paper, we employ a gradient-based adversarial attack method to attack tabular data in both black-box and white-box settings and analyze the results. Experimental results demonstrate that tabular data is vulnerable to even subtle perturbations, and that adversarial examples generated using gradient-based attacks are effective against non-gradient-based models as well. This highlights the need for developing robust defense mechanisms for CCFD algorithms.
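The transfer setting described above can be illustrated with a minimal sketch: craft FGSM-style perturbations against a gradient-based surrogate (here, a logistic regression, for which the input gradient has a closed form), then evaluate the same perturbed rows against a non-gradient-based model (a random forest). This is an illustrative reconstruction on synthetic tabular data, not the authors' exact models, features, or attack configuration; the epsilon value and dataset are assumptions for demonstration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for tabular transaction features (not the paper's dataset).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# White-box surrogate (gradient-based) and black-box target (non-gradient-based).
surrogate = LogisticRegression(max_iter=1000).fit(X, y)
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# For logistic regression, the gradient of the cross-entropy loss w.r.t. the
# input x is (sigmoid(w·x + b) - y) * w, so FGSM needs no autodiff here.
w, b = surrogate.coef_[0], surrogate.intercept_[0]
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
grad = (p - y)[:, None] * w[None, :]

eps = 0.5  # assumed perturbation budget
X_adv = X + eps * np.sign(grad)

clean_surr, adv_surr = surrogate.score(X, y), surrogate.score(X_adv, y)
clean_tgt, adv_tgt = target.score(X, y), target.score(X_adv, y)
print(f"surrogate: {clean_surr:.3f} -> {adv_surr:.3f}")
print(f"target (transfer): {clean_tgt:.3f} -> {adv_tgt:.3f}")
```

The drop in the target model's accuracy on examples crafted purely from the surrogate's gradients is the transferability effect the paper reports; in practice, tabular attacks also need to respect feature constraints (integer amounts, categorical codes), which this sketch omits.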

Takeaways, Limitations

Takeaways:
By demonstrating that tabular credit card transaction data is vulnerable to adversarial attacks, we raise awareness among financial technology professionals about the security and reliability of ML models.
We demonstrate that gradient-based attacks are effective against other types of models, highlighting the need for developing diverse defense mechanisms.
We present research directions for strengthening the security of CCFD systems.
Limitations:
The study is limited to specific gradient-based attack methods; other types of adversarial attacks require further research.
Further research is needed to determine the attacks' applicability and effectiveness in real-world settings.
The paper does not propose or evaluate defensive techniques.