This paper presents a novel framework for sign spotting, which identifies and localizes individual signs in continuous sign language videos, addressing the data scarcity problem in sign language translation. To overcome the lexical inflexibility and ambiguity of existing sign spotting methods, we propose a training-free approach that integrates a large language model (LLM). We extract spatiotemporal and hand features and match them against a large-scale sign language dictionary using dynamic time warping and cosine similarity. We then leverage the LLM to perform context-aware lexical disambiguation via beam search. Experimental results on synthetic and real-world sign language datasets demonstrate improved accuracy and sentence fluency compared to existing methods.
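To make the dictionary-matching step concrete, the following is a minimal sketch, not the paper's actual implementation: it scores a feature window from a continuous video against dictionary entries using dynamic time warping over cosine distances and returns the top candidate glosses. All names (`dtw_cost`, `spot_candidates`), the feature dimension, and the toy dictionary are illustrative assumptions.

```python
# Illustrative sketch (assumed interface, not the paper's code): DTW + cosine matching
# of a continuous-video feature window against dictionary sign entries.
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """1 - cosine similarity between two feature vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-8
    return 1.0 - float(np.dot(a, b) / denom)

def dtw_cost(query: np.ndarray, reference: np.ndarray) -> float:
    """DTW cost between feature sequences of shape (T1, D) and (T2, D),
    normalized by the combined sequence length."""
    t1, t2 = len(query), len(reference)
    acc = np.full((t1 + 1, t2 + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, t1 + 1):
        for j in range(1, t2 + 1):
            d = cosine_distance(query[i - 1], reference[j - 1])
            acc[i, j] = d + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[t1, t2] / (t1 + t2)

def spot_candidates(window_feats: np.ndarray,
                    dictionary: dict[str, np.ndarray],
                    top_k: int = 5) -> list[tuple[str, float]]:
    """Return the top-k dictionary glosses with the lowest DTW cost for one window."""
    scored = [(gloss, dtw_cost(window_feats, ref)) for gloss, ref in dictionary.items()]
    scored.sort(key=lambda x: x[1])
    return scored[:top_k]

# Toy example: a 20-frame window scored against a 3-entry dictionary of random features.
rng = np.random.default_rng(0)
dictionary = {g: rng.normal(size=(rng.integers(15, 30), 256)) for g in ("HELLO", "THANK-YOU", "HELP")}
window = rng.normal(size=(20, 256))
print(spot_candidates(window, dictionary))
```

In the full pipeline described above, such per-window candidate lists would then be passed to the LLM, which selects among them with beam search using sentence context.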