Diacritization of Arabic text remains a persistent challenge in natural language processing due to the language's rich morphology. In this paper, we present Sadeed, a decoder-only language model fine-tuned from Kuwain 1.5B [Hennara et al., 2025], a compact model pretrained on a diverse Arabic corpus. Sadeed is fine-tuned on carefully curated, high-quality diacritized texts produced through a rigorous data-cleaning and normalization pipeline. Despite using modest computational resources, Sadeed achieves results competitive with proprietary large-scale language models and outperforms existing models trained on similar domains. Furthermore, this paper highlights key shortcomings in current benchmarking practices for Arabic diacritization. To address these issues, we introduce SadeedDiac-25, a novel benchmark designed to enable fairer and more comprehensive evaluation across a variety of text genres and complexity levels. Together, Sadeed and SadeedDiac-25 provide a solid foundation for advancing Arabic NLP applications, including machine translation, speech synthesis, and language-learning tools.