This paper proposes AulSign, a novel method that applies large language models (LLMs) to natural-language-to-sign-language translation under scarce data. AulSign leverages the text-processing capabilities of LLMs through in-context learning: it builds prompts dynamically by selecting relevant examples and then associates the input text with the corresponding signs. Because LLMs have little built-in knowledge of sign languages, the method links each sign to a natural language description. Experiments on the SignBank+ and LaCAM CNR-ISTC datasets, covering English and Italian, show that AulSign outperforms the best existing models in low-data conditions. This approach has the potential to improve accessibility and inclusion for underserved language communities.
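To make the described pipeline more concrete, the sketch below shows one plausible arrangement of the in-context learning steps named in the abstract: retrieving the training examples most similar to the input sentence, listing sign-to-description mappings, and assembling a prompt for an LLM. This is a minimal illustration under assumptions, not the paper's implementation: the embedding model, the `select_examples`, `build_prompt`, and `translate` helpers, and the user-supplied `llm_generate` callable are all hypothetical.

```python
# Illustrative sketch (not the authors' code): dynamic prompting with
# similarity-based example selection for text-to-sign translation.
# Assumptions: a small parallel corpus of (sentence, gloss sequence) pairs,
# a sentence-embedding model for retrieval, and a user-supplied LLM call.

from typing import Callable, Dict, List, Tuple
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model


def select_examples(query: str,
                    corpus: List[Tuple[str, str]],
                    k: int = 3) -> List[Tuple[str, str]]:
    """Pick the k corpus pairs whose source sentence is most similar to the query."""
    texts = [src for src, _ in corpus]
    emb = encoder.encode([query] + texts, normalize_embeddings=True)
    sims = emb[1:] @ emb[0]          # cosine similarity (embeddings are normalized)
    top = np.argsort(-sims)[:k]
    return [corpus[i] for i in top]


def build_prompt(query: str,
                 examples: List[Tuple[str, str]],
                 sign_descriptions: Dict[str, str]) -> str:
    """Assemble an in-context prompt: sign descriptions, retrieved examples, query."""
    lines = ["Translate the sentence into a sequence of sign glosses.",
             "Available signs and their natural-language descriptions:"]
    lines += [f"- {gloss}: {desc}" for gloss, desc in sign_descriptions.items()]
    lines.append("Examples:")
    lines += [f"Sentence: {src}\nGlosses: {tgt}" for src, tgt in examples]
    lines.append(f"Sentence: {query}\nGlosses:")
    return "\n".join(lines)


def translate(query: str,
              corpus: List[Tuple[str, str]],
              sign_descriptions: Dict[str, str],
              llm_generate: Callable[[str], str]) -> str:
    """Retrieve examples, build the prompt, and query the LLM for gloss output."""
    prompt = build_prompt(query, select_examples(query, corpus), sign_descriptions)
    return llm_generate(prompt)  # llm_generate wraps whichever LLM API is used
```

In this sketch the LLM never needs prior knowledge of the sign vocabulary: every sign it may output is grounded by a natural language description placed directly in the prompt, which mirrors the association step described above.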