Daily Arxiv

This page organizes papers related to artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; when sharing, simply cite the source.

Observation-Free Attacks on Online Learning to Rank

Created by
  • Haebom

Author

Sameep Chattopadhyay, Nikhil Karamchandani, Sharayu Moharir

Outline

Online Learning to Rank (OLTR) plays a crucial role in information retrieval and machine learning systems and is widely applied in search engines and content recommenders. Despite this widespread use, however, the vulnerability of OLTR algorithms to systematic adversarial attacks remains poorly understood. This study presents a novel framework for attacking widely used OLTR algorithms, designed to induce linear regret in the learner while ensuring that a chosen target item set appears in the top-K recommendation list for T - o(T) rounds. The authors propose two novel attack strategies, CascadeOFA for CascadeUCB1 and PBMOFA for PBM-UCB, and prove that both succeed with only O(log T) manipulations. The theoretical analysis is further supported by experimental results on real-world data.
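The summary above does not reproduce the paper's actual constructions. As a rough illustration of the general idea behind such attacks (poisoning click feedback so that a UCB-based cascading bandit keeps a chosen target set in the top-K list, at logarithmic cost), here is a toy sketch. The item click probabilities, target set, exploration constant, and corruption rule are all illustrative assumptions, not the paper's CascadeOFA:

```python
import numpy as np

def cascade_ucb1_with_attack(T=20000, K=2, probs=(0.9, 0.8, 0.3, 0.2),
                             target=frozenset({2, 3}), seed=0):
    """Toy click-poisoning attack on a CascadeUCB1-style learner (illustrative only)."""
    rng = np.random.default_rng(seed)
    L = len(probs)
    counts = np.ones(L)                       # one initial pull per item (simplified)
    means = np.array([float(rng.random() < p) for p in probs])
    corruptions = 0                           # number of click signals the attacker flips
    target_rounds = 0                         # rounds where the shown list is all-target
    for t in range(1, T + 1):
        ucb = means + np.sqrt(1.5 * np.log(t) / counts)
        ranked = np.argsort(-ucb)[:K]         # show the K items with highest UCB
        if set(int(i) for i in ranked) <= target:
            target_rounds += 1
        # Cascade user model: scan top-down, click the first attractive item.
        clicked = -1
        for pos, item in enumerate(ranked):
            if rng.random() < probs[item]:
                clicked = item
                examined = ranked[:pos + 1]   # items scanned before stopping
                break
        else:
            examined = ranked                 # no click: all K items examined
        # Attacker: erase clicks on non-target items before the learner sees them,
        # so non-target attraction estimates are driven toward zero.
        for item in examined:
            fb = 1.0 if item == clicked else 0.0
            if fb > 0 and int(item) not in target:
                fb = 0.0
                corruptions += 1
            means[item] = (means[item] * counts[item] + fb) / (counts[item] + 1)
            counts[item] += 1
    return target_rounds, corruptions
```

In this toy run the non-target items' UCB indices decay once their clicks are suppressed, so the (genuinely inferior) target items occupy the top-K list for all but a small fraction of rounds, while the attacker flips only a logarithmically growing number of clicks — the qualitative behavior the paper's guarantees formalize.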

Takeaways, Limitations

Takeaways:
  • Presents a novel framework demonstrating the vulnerability of OLTR algorithms to adversarial attacks.
  • Develops two new attack strategies, CascadeOFA and PBMOFA.
  • Provides theoretical guarantees for the success of the proposed attack strategies.
  • Supports the theoretical analysis with experiments on real-world data.
Limitations:
  • The attack strategies are tailored to specific OLTR algorithms (CascadeUCB1 and PBM-UCB).
  • Further research is needed to determine generalizability to other OLTR algorithms.
  • A real-world impact assessment is needed to quantify the amount of manipulation a practical attack would require.