This paper provides a comprehensive overview of Federated Learning (FL), an emerging paradigm in distributed machine learning. FL enables multiple clients, such as mobile devices, edge nodes, or organizations, to collaboratively train a shared global model without centralizing sensitive data. This decentralized approach is particularly attractive in domains such as healthcare, finance, and smart IoT systems, as it addresses growing concerns about data privacy, security, and regulatory compliance. Starting from the core architecture and communication protocols of FL, we describe the standard FL lifecycle (local training, model aggregation, and global updates) and discuss key technical challenges: handling non-IID (not independent and identically distributed) data, mitigating system and hardware heterogeneity, reducing communication overhead, and ensuring privacy through mechanisms such as differential privacy and secure aggregation. We also examine emerging trends in FL research, including personalized FL, cross-device versus cross-silo settings, and integration with other paradigms such as reinforcement learning and quantum computing; summarize benchmark datasets and evaluation metrics commonly used in FL research and real-world applications; and outline open research issues and future directions for building scalable, efficient, and reliable FL systems.
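The lifecycle mentioned above (each client trains locally, then the server aggregates the resulting models into a new global model) can be sketched with a minimal FedAvg-style round. This is only an illustration under simplifying assumptions: the toy scalar linear model, learning rate, and client datasets below are hypothetical and not from the paper.

```python
# Hypothetical toy sketch of one FedAvg-style federated round:
# clients train locally on private data; the server averages the
# returned models, weighted by each client's number of samples.

def local_update(weights, data, lr=0.01, epochs=5):
    """One client's local training step: SGD on squared error for a
    scalar linear model y ~ w * x (toy example, not the paper's setup)."""
    w = weights
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def fedavg_round(global_w, client_datasets):
    """Server side: broadcast the global model, collect locally trained
    models, and average them weighted by client dataset size."""
    updates = [(local_update(global_w, d), len(d)) for d in client_datasets]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

# Two clients whose private data follow the same relation y = 3x.
clients = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(3.0, 9.0), (4.0, 12.0), (5.0, 15.0)],
]
w = 0.0
for _ in range(10):
    w = fedavg_round(w, clients)
# w approaches 3.0, yet no client's raw data ever leaves that client.
```

In a real deployment the averaged quantities would be full parameter tensors, only a sampled subset of clients would participate each round, and the aggregation step would typically be wrapped in secure aggregation or differential-privacy noise, which this sketch omits.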