
Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Exploring the In-Context Learning Capabilities of LLMs for Money Laundering Detection in Financial Graphs

Created by
  • Haebom

Author

Erfan Pirmorad

Outline

This paper explores the use of large language models (LLMs) for graph-based analytics in money laundering investigations. The authors propose a lightweight pipeline that extracts the k-hop neighborhood around an entity of interest in a financial knowledge graph, transforms it into structured text, and uses few-shot in-context learning to prompt the LLM to identify suspicious activity and explain its reasoning. Experiments on synthetic anti-money laundering (AML) scenarios show that the LLM can mimic analyst-level reasoning, identifying red flags and generating consistent explanations. While exploratory in nature, the work demonstrates the potential of LLM-based graph reasoning in the AML domain and lays a foundation for explainable, language-based financial crime analytics.
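The paper does not appear to publish code, so the following is a minimal sketch of the pipeline as described: k-hop neighborhood extraction, serialization into structured text, and few-shot prompting. The graph schema (edges carrying amount/date attributes), the prompt wording, and the call_llm helper are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the described pipeline (assumptions noted in comments).
import networkx as nx

def extract_k_hop_subgraph(G: nx.DiGraph, entity, k: int = 2) -> nx.DiGraph:
    """Subgraph of all nodes within k hops of `entity`, ignoring edge direction."""
    return nx.ego_graph(G, entity, radius=k, undirected=True)

def serialize_subgraph(sub: nx.DiGraph) -> str:
    """Flatten the neighborhood into structured text for the LLM.
    Assumes edges carry 'amount' and 'date' attributes (our schema, not the paper's)."""
    return "\n".join(
        f"{u} -> {v}: amount={d.get('amount')}, date={d.get('date')}"
        for u, v, d in sub.edges(data=True)
    )

# One hand-written few-shot exemplar; the paper's actual exemplars are not public.
FEW_SHOT_EXAMPLE = """\
Transactions:
A -> B: amount=9900, date=2024-01-01
A -> C: amount=9900, date=2024-01-02
Assessment: SUSPICIOUS. Repeated transfers just under a 10,000 reporting
threshold are a classic structuring (smurfing) red flag.
"""

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API (Gemini, OpenAI, etc.)."""
    raise NotImplementedError("plug in your LLM client here")

def assess_entity(G: nx.DiGraph, entity, k: int = 2) -> str:
    """Build the few-shot prompt for one entity and ask the LLM for a verdict."""
    neighborhood = serialize_subgraph(extract_k_hop_subgraph(G, entity, k))
    prompt = (
        "You are an AML analyst. Label the focal entity SUSPICIOUS or NORMAL "
        "and explain the red flags.\n\n"
        f"{FEW_SHOT_EXAMPLE}\n"
        f"Focal entity: {entity}\n"
        f"Transactions:\n{neighborhood}\n"
        "Assessment:"
    )
    return call_llm(prompt)
```

Note that "few-shot in-context learning" here means only that exemplars are prepended to the prompt; no fine-tuning of the model is involved.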

Takeaways, Limitations

Takeaways:
Demonstrates the potential of LLM-assisted, graph-based money laundering investigation.
Shows that an LLM can provide analyst-level reasoning and explainable results.
Presents a novel approach to explainable, language-based financial crime analysis.
Limitations:
This is an exploratory study on synthetic data; validation on real transaction data is still needed.
Further research is needed on the performance and reliability of LLMs in this setting.
The k-hop neighborhood size and extraction strategy still need to be optimized (see the sketch after this list).
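On the last point: the size of the extracted neighborhood, and with it the prompt length, grows quickly with k, so the choice of k trades context-window budget against evidence coverage. A rough, self-contained illustration on a random stand-in graph (the graph below is synthetic and hypothetical, not the paper's data):

```python
# How neighborhood size grows with k on a stand-in transaction graph.
import networkx as nx

G = nx.gnm_random_graph(5000, 20000, seed=0, directed=True)  # hypothetical graph
for k in (1, 2, 3):
    sub = nx.ego_graph(G, 0, radius=k, undirected=True)
    print(f"k={k}: {sub.number_of_nodes()} nodes, {sub.number_of_edges()} edges")
```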