Daily Arxiv

This page curates AI-related papers published worldwide.
All summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

A Neurosymbolic Framework for Interpretable Cognitive Attack Detection in Augmented Reality

Created by
  • Haebom

Authors

Rongqian Chen, Allison Andreyev, Yanming Xiu, Mahdi Imani, Bin Li, Maria Gorlatova, Gang Tan, Tian Lan

Outline

This paper presents CADAR, a neurosymbolic approach for detecting cognitive attacks in augmented reality (AR). CADAR uses a pre-trained vision-language model (VLM) to fuse multimodal visual and language inputs into a symbolic perceptual-graph representation that incorporates prior knowledge, importance weights, and temporal correlations; it then detects cognitive attacks via particle-filter-based statistical inference over this graph. Existing methods either focus on visual variations at the pixel or image level, and thus lack semantic reasoning, or rely directly on pre-trained VLMs, which are black boxes with limited interpretability. CADAR instead combines the adaptability of pre-trained VLMs with the interpretability and inferential rigor of particle filtering. On an extended AR cognitive-attack dataset, CADAR improves detection accuracy by up to 10.7% over state-of-the-art baselines in challenging AR attack scenarios.
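To make the two-stage pipeline concrete, here is a minimal Python sketch of the second stage. Everything in it is an assumption for illustration: the PerceptNode structure stands in for the paper's perceptual graph, the latent state is reduced to a single scalar "attack intensity," and the Gaussian consistency likelihood is an arbitrary choice; the paper's actual graph, state space, and likelihood model are not specified in this summary.

```python
import math
import random
from dataclasses import dataclass

# Hypothetical node of a symbolic perceptual graph: one detected percept with
# a semantic label, an importance weight, and a detection confidence. The
# actual CADAR representation is richer (prior knowledge, temporal edges).
@dataclass
class PerceptNode:
    label: str
    importance: float  # how safety-critical this percept is, in [0, 1]
    confidence: float  # VLM/detector confidence, in [0, 1]

def consistency_score(node: PerceptNode, attack_intensity: float) -> float:
    """Likelihood of a node's confidence under a hypothesized attack intensity.

    Assumption: under a stronger attack, important percepts tend to be
    suppressed, so their observed confidence drops. A Gaussian likelihood
    around that expectation is an illustrative choice, not the paper's model.
    """
    expected = 1.0 - attack_intensity * node.importance
    return math.exp(-((node.confidence - expected) ** 2) / 0.02)

def particle_filter_step(particles, weights, graph, noise=0.05):
    """One predict / update / resample cycle over the latent attack intensity."""
    # Predict: diffuse each particle with small process noise, clamped to [0, 1].
    particles = [min(1.0, max(0.0, p + random.gauss(0.0, noise))) for p in particles]
    # Update: reweight particles by how well they explain the current graph.
    weights = [w * math.prod(consistency_score(n, p) for n in graph)
               for p, w in zip(particles, weights)]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]
    # Resample: multinomial resampling (systematic resampling is more common).
    idx = random.choices(range(len(particles)), weights=weights, k=len(particles))
    return [particles[i] for i in idx], [1.0 / len(particles)] * len(particles)

if __name__ == "__main__":
    random.seed(0)
    n = 500
    particles = [random.random() for _ in range(n)]
    weights = [1.0 / n] * n
    # One frame in which an important "stop sign" percept has suspiciously low
    # confidence, as if occluded or contradicted by adversarial AR content.
    frame = [PerceptNode("stop_sign", importance=0.9, confidence=0.2),
             PerceptNode("pedestrian", importance=0.8, confidence=0.85)]
    for _ in range(10):
        particles, weights = particle_filter_step(particles, weights, frame)
    estimate = sum(particles) / n
    print(f"estimated attack intensity: {estimate:.2f}")  # alarm above a threshold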

Takeaways, Limitations

Takeaways:
The neurosymbolic approach improves both the accuracy and the interpretability of AR cognitive attack detection.
It successfully combines the adaptability of pre-trained VLMs with the interpretability and inferential rigor of particle filtering.
With up to 10.7% higher accuracy than existing methods, it marks a significant advance in AR cognitive attack detection.
Limitations:
The generalization of the proposed method needs further study; it remains to be verified whether gains achieved on one dataset carry over to others.
The method depends on pre-trained VLMs and may directly inherit their limitations.
The computational cost of particle filtering may be too high for real-time AR applications (see the back-of-the-envelope sketch after this list).
Detection performance against a wider variety of AR cognitive attack types still needs to be evaluated.
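To give a feel for the real-time concern, a back-of-the-envelope sketch (the per-evaluation cost and node count below are assumptions, not measurements from the paper): each filtering step evaluates the likelihood for every (particle, node) pair, so per-frame cost grows as O(particles × nodes), while an AR display running at 60 fps leaves only about 16.7 ms per frame.

```python
# Rough cost model for one filtering step: O(num_particles * num_nodes)
# likelihood evaluations. per_eval_s is an assumed cost per evaluation.
def step_cost_s(num_particles: int, num_nodes: int, per_eval_s: float = 1e-6) -> float:
    return num_particles * num_nodes * per_eval_s

for n in (100, 1_000, 10_000):
    ms = step_cost_s(n, 20) * 1e3
    print(f"{n:>6} particles, 20 nodes -> ~{ms:.1f} ms/frame (60 fps budget: 16.7 ms)")
```

Under these assumed numbers, 10,000 particles already exceed a 60 fps frame budget by an order of magnitude, which illustrates the real-time concern noted above.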