This paper presents a novel, training-free framework for sign language identification and sign spotting that integrates a large language model (LLM) to address the data scarcity problem in sign language translation. Unlike existing approaches, the proposed method extracts global spatiotemporal and hand-shape features and matches them against a large-scale sign language dictionary using dynamic time warping (DTW) and cosine similarity. Without any fine-tuning, the LLM then performs context-aware lexical interpretation via beam search, mitigating the noise and ambiguity introduced by the matching step. Experiments on synthetic and real sign language datasets demonstrate improvements in recognition accuracy and sentence fluency over existing methods.
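For concreteness, the sketch below illustrates the dictionary-matching step as the abstract describes it: DTW over cosine distances between a query feature sequence and each dictionary entry, returning the top candidates that would then be handed to the LLM for context-aware interpretation. All names (`dtw_cosine`, `match_dictionary`), the feature shapes, and the length normalization are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch, assuming (T, D) feature sequences per sign and a gloss-keyed
# dictionary of reference sequences; none of this is the paper's exact code.
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """1 - cosine similarity between two feature vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 1.0 - float(a @ b) / denom if denom > 0 else 1.0

def dtw_cosine(query: np.ndarray, ref: np.ndarray) -> float:
    """DTW alignment cost between two (T, D) sequences, with cosine
    distance as the local cost (length-normalized at the end)."""
    n, m = len(query), len(ref)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = cosine_distance(query[i - 1], ref[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m] / (n + m)

def match_dictionary(query: np.ndarray,
                     dictionary: dict[str, np.ndarray],
                     top_k: int = 5) -> list[tuple[str, float]]:
    """Rank dictionary glosses by DTW cost; the top-k candidates would
    then be reranked by the LLM using sentence context."""
    scores = [(gloss, dtw_cosine(query, ref))
              for gloss, ref in dictionary.items()]
    return sorted(scores, key=lambda s: s[1])[:top_k]

# Toy usage with random feature sequences (frames x feature_dim).
rng = np.random.default_rng(0)
dictionary = {"HELLO": rng.normal(size=(30, 64)),
              "THANKS": rng.normal(size=(25, 64))}
query = rng.normal(size=(28, 64))
print(match_dictionary(query, dictionary, top_k=2))
```

Length-normalizing the alignment cost is one plausible way to keep scores comparable across dictionary entries of different durations; the paper may use a different normalization or band constraint.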