This paper presents our approach to SemEval-2025 Task 11 Track A (multi-label sentiment classification across 28 languages). We explore two main strategies: fully fine-tuning a Transformer model and training only a classifier on top of a frozen encoder, and we evaluate different settings, including the fine-tuning strategy, model architecture, loss function, encoder, and classifier. We find that training a classifier on top of a prompt-based encoder such as mE5 or BGE yields significantly better results than fully fine-tuning XLM-R or mBERT. Our best-performing system on the final leaderboard is an ensemble that combines multiple BGE models with CatBoost classifiers trained under different configurations. This ensemble achieves an average macro-F1 score of 56.58 across all languages.
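To make the classifier-on-frozen-encoder setup concrete, the following is a minimal sketch, not the authors' exact pipeline: a prompt-based multilingual encoder (here an E5-style model via sentence-transformers, assumed for illustration) produces fixed embeddings, and one binary CatBoost classifier per emotion label is trained on top. The model name, prompt prefix, labels, and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from catboost import CatBoostClassifier

# Prompt-based encoder; E5-style models expect a "query: " prefix on inputs.
encoder = SentenceTransformer("intfloat/multilingual-e5-base")

def embed(texts):
    # Encode texts with the frozen encoder; embeddings are L2-normalized.
    return encoder.encode([f"query: {t}" for t in texts], normalize_embeddings=True)

# Toy multi-label data: multi-hot vectors over three hypothetical labels (joy, anger, fear).
train_texts = ["I am so happy today!", "This makes me furious.", "I'm terrified of the dark."]
train_labels = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])

X_train = embed(train_texts)

# One-vs-rest: train one binary CatBoost classifier per label on the frozen embeddings.
classifiers = []
for j in range(train_labels.shape[1]):
    clf = CatBoostClassifier(iterations=200, depth=4, verbose=0)
    clf.fit(X_train, train_labels[:, j])
    classifiers.append(clf)

# Inference: assign a label when its predicted probability exceeds 0.5.
X_test = embed(["What a wonderful surprise!"])
probs = np.column_stack([clf.predict_proba(X_test)[:, 1] for clf in classifiers])
print((probs > 0.5).astype(int))
```

An ensemble along the lines described in the abstract could then average the per-label probabilities of several such models (e.g., different BGE checkpoints or prompt configurations) before thresholding.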