This paper explores the use of large language models (LLMs) for graph-based analytics in money laundering investigations. We propose a lightweight pipeline that extracts the k-hop neighborhood around entities of interest in a financial knowledge graph, serializes it into structured text, and uses few-shot in-context learning to prompt the LLM to identify suspicious activity and explain why. Experimental results on synthetic anti-money laundering (AML) scenarios show that the LLM can approximate analyst-level reasoning, identifying red flags and generating consistent explanations. While exploratory in nature, this work demonstrates the potential of LLM-based graph reasoning in the AML domain and lays the foundation for explainable, language-based financial crime analytics.
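To make the pipeline concrete, the sketch below illustrates the three stages described above: k-hop neighborhood extraction, serialization into structured text, and few-shot prompt construction. The graph library (networkx), attribute names (`amount`, `date`), helper names, and the toy structuring example are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Illustrative sketch of the proposed pipeline: extract a k-hop neighborhood,
# serialize it to plain text, and assemble a few-shot prompt for an LLM.
# Library choice, attribute names, and few-shot wording are assumptions.
import networkx as nx


def extract_k_hop_subgraph(graph: nx.MultiDiGraph, entity: str, k: int = 2) -> nx.MultiDiGraph:
    """Return the subgraph of nodes within k hops of the entity of interest."""
    # undirected=True so both incoming and outgoing transfers are captured.
    return nx.ego_graph(graph, entity, radius=k, undirected=True)


def serialize_subgraph(subgraph: nx.MultiDiGraph) -> str:
    """Flatten each transaction edge into one plain-text sentence."""
    lines = []
    for src, dst, attrs in subgraph.edges(data=True):
        amount = attrs.get("amount", "an unknown amount")
        date = attrs.get("date", "an unknown date")
        lines.append(f"- {src} transferred {amount} to {dst} on {date}.")
    return "\n".join(sorted(lines))


# A single hand-written demonstration used for few-shot in-context learning.
FEW_SHOT_EXAMPLE = (
    "Transactions:\n"
    "- AcctA transferred 9900 USD to AcctB on 2024-01-02.\n"
    "- AcctA transferred 9800 USD to AcctB on 2024-01-03.\n"
    "Assessment: Suspicious. Repeated transfers just under the 10,000 USD "
    "reporting threshold suggest structuring.\n"
)


def build_prompt(entity: str, subgraph_text: str) -> str:
    """Compose a few-shot prompt asking the LLM to flag red flags and explain them."""
    return (
        "You are a financial crime analyst. Given a list of transactions around "
        "an entity, decide whether the activity is suspicious and explain why.\n\n"
        f"{FEW_SHOT_EXAMPLE}\n"
        f"Transactions around {entity}:\n{subgraph_text}\n"
        "Assessment:"
    )


if __name__ == "__main__":
    # Toy synthetic financial knowledge graph with a simple layering pattern.
    g = nx.MultiDiGraph()
    g.add_edge("Acct1", "Acct2", amount="9500 USD", date="2024-03-01")
    g.add_edge("Acct2", "Acct3", amount="9400 USD", date="2024-03-02")
    g.add_edge("Acct3", "OffshoreCo", amount="18500 USD", date="2024-03-05")

    sub = extract_k_hop_subgraph(g, "Acct2", k=2)
    prompt = build_prompt("Acct2", serialize_subgraph(sub))
    print(prompt)  # The assembled prompt would then be sent to the LLM of choice.
```

In this sketch the LLM call itself is left out; the output of `build_prompt` is simply the structured-text query that would be submitted to whichever model the pipeline is instantiated with.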