Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Leveraging Large Language Models for Accurate Sign Language Translation in Low-Resource Scenarios

Created by
  • Haebom

Author

Luana Bulla, Gabriele Tuccio, Misael Mongiovì, Aldo Gangemi

Outline

This paper proposes AulSign, a novel method that leverages large language models (LLMs) to address natural-language-to-sign-language translation and the limited data available for it. AulSign applies the text-processing capabilities of LLMs through in-context learning with dynamic prompting: for each input, relevant samples are selected and associated with their signs, and sign language is linked to natural language descriptions to compensate for the LLMs' lack of sign-language knowledge. Experiments on the SignBank+ and LaCAM CNR-ISTC datasets for English and Italian show that AulSign outperforms the best existing models in low-data settings. This approach has the potential to improve accessibility and inclusion for underserved language communities.
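The dynamic-prompting idea described above can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: the example bank, the bag-of-words similarity (a stand-in for a real sentence-embedding model), and the gloss notation are all assumptions.

```python
# Hypothetical sketch of AulSign-style dynamic prompting: for each input
# sentence, retrieve the most similar (text, sign-gloss) pairs from a small
# example bank and assemble them into a few-shot prompt for an LLM.
from collections import Counter
from math import sqrt

def bow_similarity(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words counts (a crude stand-in
    for a real sentence-embedding model)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(query: str, bank: list[tuple[str, str]], k: int = 2) -> str:
    """Select the k examples most similar to the query and format
    them as a few-shot translation prompt."""
    ranked = sorted(bank, key=lambda ex: bow_similarity(query, ex[0]), reverse=True)
    shots = "\n".join(f"Text: {t}\nSigns: {s}" for t, s in ranked[:k])
    return f"Translate text into sign glosses.\n{shots}\nText: {query}\nSigns:"

# Tiny illustrative example bank of (text, sign-gloss sequence) pairs.
BANK = [
    ("the dog runs fast", "DOG RUN FAST"),
    ("i like coffee", "ME LIKE COFFEE"),
    ("the cat sleeps", "CAT SLEEP"),
]

prompt = build_prompt("the dog sleeps", BANK)
print(prompt)
```

The assembled prompt would then be sent to an LLM; the key point is that the in-context examples change per input, so the model sees the most relevant text-to-sign mappings even when the overall dataset is small.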

Takeaways, Limitations

Takeaways:
  • LLMs open new possibilities for sign language translation.
  • Presents an effective way to mitigate the shortage of sign language data.
  • Contributes to improving accessibility and inclusion for marginalized language communities.
  • Achieves better performance than existing models in low-data environments.
Limitations:
  • Further research is needed on the scale and diversity of the datasets used.
  • The visual and spatial characteristics of sign language may not be fully captured.
  • Generalization to a wider variety of sign languages remains to be verified.
  • Accurately mapping natural language descriptions to signs remains difficult.