This paper presents CADAR, a novel neurosymbolic approach for detecting cognitive attacks in augmented reality (AR) environments. CADAR uses a pre-trained vision-language model (VLM) to fuse multimodal visual and language inputs into a symbolic perceptual-graph representation that incorporates prior knowledge, importance weights, and temporal correlations; it then detects cognitive attacks via particle filter-based statistical inference. Existing methods either focus on visual variations, which limits them to pixel- or image-level processing without semantic inference, or rely on pre-trained VLMs, which act as black boxes with limited interpretability. In contrast, CADAR combines the adaptability of pre-trained VLMs with the interpretability and inference rigor of particle filtering. Experimental results on an extended AR cognitive-attack dataset demonstrate up to 10.7% higher accuracy than state-of-the-art models in challenging AR attack scenarios.
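To make the particle filter-based inference step concrete, the sketch below shows a minimal bootstrap particle filter over a binary hidden state (benign vs. under attack). The per-frame anomaly scores, transition probability, and function name are illustrative assumptions standing in for the paper's actual perceptual-graph consistency signal, not its real formulation.

```python
import random

def particle_filter_attack_detect(scores, n_particles=1000,
                                  p_switch=0.05, seed=0):
    """Minimal bootstrap particle filter over a binary hidden state
    (0 = benign, 1 = under attack). `scores` are hypothetical per-frame
    anomaly scores in [0, 1] derived from the symbolic perceptual graph.
    Returns the estimated posterior P(attack) for each frame."""
    rng = random.Random(seed)
    particles = [0] * n_particles  # all particles start in the benign state
    posteriors = []
    for s in scores:
        # Transition: each particle flips state with probability p_switch.
        particles = [p ^ (rng.random() < p_switch) for p in particles]
        # Likelihood: attacked states explain high scores, benign ones low.
        weights = [s if p == 1 else 1.0 - s for p in particles]
        # Multinomial resampling proportional to the weights.
        particles = rng.choices(particles, weights=weights, k=n_particles)
        posteriors.append(sum(particles) / n_particles)
    return posteriors
```

Feeding the filter a run of low scores followed by high scores shows the posterior attack probability staying near zero, then rising sharply once the observations become anomalous, which is the interpretable inference behavior the abstract attributes to particle filtering.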