This paper examines the challenges AI faces in assessing and justifying information credibility, motivating the need for a system that assists users in evaluating the credibility of online information. To address the absence of credibility metrics in existing search engines, we propose TrueGL, a model based on IBM's Granite-1B that assigns credibility scores and provides explanations for them. Fine-tuned on a custom dataset and guided by prompt engineering, TrueGL generates a continuous credibility score ranging from 0.1 to 1 alongside a textual explanation. Experimental results demonstrate that TrueGL outperforms other small-scale LLM-based and rule-based approaches on key evaluation metrics such as MAE, RMSE, and R². Its high accuracy, broad content coverage, and ease of use help increase access to trustworthy information and reduce the spread of misinformation. The source code and model are publicly available.
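As a minimal illustration of the evaluation metrics named above (the metric names come from the abstract; the data, variable names, and library choice are hypothetical, not taken from the TrueGL release), the following Python sketch computes MAE, RMSE, and R² between predicted and reference credibility scores:

```python
# Hypothetical sketch: comparing predicted credibility scores against
# reference annotations using the metrics reported in the abstract.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Illustrative reference scores (continuous, in [0.1, 1]) and model predictions.
y_true = np.array([0.9, 0.3, 0.7, 0.1, 0.5])
y_pred = np.array([0.85, 0.4, 0.65, 0.2, 0.55])

mae = mean_absolute_error(y_true, y_pred)            # mean absolute error
rmse = np.sqrt(mean_squared_error(y_true, y_pred))   # root mean squared error
r2 = r2_score(y_true, y_pred)                        # coefficient of determination

print(f"MAE={mae:.3f}  RMSE={rmse:.3f}  R^2={r2:.3f}")
```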