This paper analyzes the applicability of large language models (LLMs) to the roughly 2,000 under-resourced African languages. We compare six LLMs, eight small language models (SLMs), and six specialized SLMs (SSLMs) to assess the current state of African language support, available training datasets, technical limitations, script coverage, and language-modeling roadmaps. Our analysis shows that although 42 African languages are supported and 23 public datasets exist, more than 98% of African languages remain unsupported, and coverage is concentrated in just four languages (Amharic, Swahili, Afrikaans, and Malagasy). We further show that only the Latin, Arabic, and Ge'ez scripts are recognized, while 20 actively used scripts remain unsupported. Major challenges include data scarcity, tokenization bias, high computational cost, and inadequate evaluation.