Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

GuARD: Effective Anomaly Detection through a Text-Rich and Graph-Informed Language Model

Created by
  • Haebom

Author

Yunhe Pang, Bo Chen, Fanjin Zhang, Yanghui Rao, Evgeny Kharlamov, Jie Tang

Outline

This paper proposes GuARD, a novel model for anomaly detection in text-rich graphs. Existing large language model (LLM)-based anomaly detection methods suffer from limitations such as failing to exploit textual information effectively or ignoring the structural features of the graph. GuARD addresses these challenges by combining the structural features used by graph-based methods with fine-grained semantic attributes extracted by small language models. It employs a multimodal, multi-turn instruction tuning framework optimized to integrate the textual and structural modalities. Experiments on four datasets show that GuARD outperforms existing methods in detection accuracy while achieving faster training and inference.
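The core idea of fusing two modalities can be illustrated with a toy sketch. This is not the paper's actual architecture: the hashed bag-of-words encoder stands in for a small language model, the degree-based features stand in for learned graph features, and the linear head is untrained, purely for illustration.

```python
# Illustrative sketch (NOT GuARD's actual model): fuse per-node
# graph-structural features with a text embedding, then score
# anomalies with a lightweight linear head.
import numpy as np

rng = np.random.default_rng(0)

def structural_features(adj: np.ndarray) -> np.ndarray:
    """Toy structural features: node degree and mean neighbor degree."""
    deg = adj.sum(axis=1)
    nbr_mean = (adj @ deg) / np.maximum(deg, 1)
    return np.stack([deg, nbr_mean], axis=1)

def text_embedding(texts: list, dim: int = 8) -> np.ndarray:
    """Stand-in for a small LM encoder: hashed bag-of-words embedding."""
    emb = np.zeros((len(texts), dim))
    for i, t in enumerate(texts):
        for tok in t.lower().split():
            emb[i, hash(tok) % dim] += 1.0
    norms = np.linalg.norm(emb, axis=1, keepdims=True)
    return emb / np.maximum(norms, 1e-9)

def anomaly_scores(adj: np.ndarray, texts: list) -> np.ndarray:
    """Concatenate both modalities; apply an (untrained) scoring head."""
    x = np.concatenate([structural_features(adj), text_embedding(texts)],
                       axis=1)
    w = rng.normal(size=x.shape[1])      # random weights, illustration only
    return 1.0 / (1.0 + np.exp(-(x @ w)))  # sigmoid anomaly probability

# 4-node toy graph with per-node texts
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
texts = ["survey on graph learning", "graph neural networks",
         "anomaly detection methods", "buy cheap followers now"]
scores = anomaly_scores(adj, texts)
print(scores.shape)  # one anomaly score per node
```

In GuARD itself, the two modalities are integrated through multi-turn instruction tuning of a language model rather than simple concatenation; the sketch only shows why both signal sources are needed for text-rich graphs.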

Takeaways, Limitations

Takeaways:
  • Stronger anomaly detection on text-rich graphs: GuARD outperforms existing methods on four benchmark datasets.
  • Faster training and inference: up to 5× speedup over existing LLM-based methods.
  • Effective fusion of graph structure and text: combines structural features of graphs with fine-grained semantic attributes of text.
Limitations:
  • The generalization performance of the proposed model needs further verification.
  • Applicability to other types of graph data remains to be evaluated.
  • The effect of the choice of small language model on performance has not been analyzed.