This paper addresses the safety of large language models (LLMs) across diverse linguistic and cultural contexts. Existing multilingual LLM safety evaluations suffer from a lack of comprehensive coverage and diverse data; to address this gap, we present LinguaSafe, a multilingual safety benchmark comprising 45,000 items across 12 languages, from Hungarian to Malay. Built from a combination of translated, transcreated, and natively sourced data, LinguaSafe provides a multidimensional, fine-grained evaluation framework that includes both direct and indirect safety assessments, along with an additional evaluation of oversensitivity. We show that safety and helpfulness results vary significantly across languages and domains, underscoring the importance of multilingual safety evaluation for LLMs. The dataset and code are released publicly to support further research.