The BUSTED team achieved fifth place in the AraGenEval shared task on Arabic AI-generated text detection. They investigated the effectiveness of three pre-trained transformer models (AraELECTRA, CAMeLBERT, and XLM-RoBERTa), fine-tuning each on the provided dataset for binary classification of human-written versus AI-generated text. The multilingual XLM-RoBERTa model achieved the highest performance with an F1 score of 0.7701, outperforming the Arabic-specific models.
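As a rough illustration of this setup, the sketch below fine-tunes XLM-RoBERTa for binary sequence classification with the Hugging Face transformers library. The checkpoint name, hyperparameters, and placeholder data are assumptions for the sketch, not the team's exact configuration.

```python
# Minimal sketch: fine-tuning XLM-RoBERTa to classify text as
# human-written (0) or AI-generated (1). Hyperparameters and data
# loading are illustrative assumptions, not the authors' settings.
import numpy as np
from datasets import Dataset
from sklearn.metrics import f1_score
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "xlm-roberta-base"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2  # binary: human vs. AI-generated
)

# Placeholder data; the shared task supplies the real train/dev splits.
train_ds = Dataset.from_dict({"text": ["..."], "label": [0]})
eval_ds = Dataset.from_dict({"text": ["..."], "label": [1]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_ds = train_ds.map(tokenize, batched=True)
eval_ds = eval_ds.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    # Report F1, the metric used to rank systems in the shared task.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"f1": f1_score(labels, preds)}

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="xlmr-aragen",
        learning_rate=2e-5,              # assumed; typical for fine-tuning
        per_device_train_batch_size=16,  # assumed
        num_train_epochs=3,              # assumed
    ),
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    data_collator=DataCollatorWithPadding(tokenizer),  # dynamic padding
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())
```

The same loop applies to the AraELECTRA and CAMeLBERT baselines by swapping the checkpoint name, since all three are standard encoder models with a sequence-classification head.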