This paper provides a concise yet comprehensive overview of Federated Learning (FL), an emerging paradigm in distributed machine learning. FL enables multiple clients, such as mobile devices, edge nodes, or organizations, to collaboratively train a shared global model without centralizing sensitive data. This distributed approach addresses growing concerns about data privacy, security, and regulatory compliance, making it particularly attractive in areas such as healthcare, finance, and smart IoT systems. The paper first reviews the core architecture, communication protocols, and standard lifecycle of FL (local training, model aggregation, and global updates), and then discusses key technical challenges: handling non-independent and identically distributed (non-IID) data, mitigating system and hardware heterogeneity, reducing communication overhead, and preserving privacy through mechanisms such as differential privacy and secure aggregation. We also examine emerging trends in FL research, including personalized FL, decentralized device-to-device FL, deployment in real-world settings, and integration with other paradigms such as reinforcement learning and quantum computing. Finally, we summarize benchmark datasets and evaluation metrics commonly used in FL research and real-world applications, and suggest open research issues and future directions for developing scalable, efficient, and reliable FL systems.
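To make the lifecycle summarized above concrete, the following is a minimal, framework-agnostic sketch of a few federated averaging (FedAvg) rounds. The client partitions, model dimension, and the `local_sgd` routine are illustrative assumptions for this sketch, not the method of any specific FL system covered in the paper.

```python
import numpy as np

# Minimal sketch of FedAvg rounds on a linear model.
# All names (local_sgd, clients, DIM, LR, LOCAL_STEPS) are
# illustrative assumptions, not part of any specific FL framework.

DIM, LR, LOCAL_STEPS = 10, 0.1, 5
rng = np.random.default_rng(0)

# Simulated clients with differently sized local datasets,
# standing in for the heterogeneous (non-IID) setting.
clients = [
    (rng.normal(size=(n, DIM)), rng.normal(size=n))  # (features, targets)
    for n in (50, 200, 120)
]

def local_sgd(w, X, y):
    """Local training: a few gradient steps on squared error."""
    w = w.copy()
    for _ in range(LOCAL_STEPS):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= LR * grad
    return w

global_w = np.zeros(DIM)
for round_idx in range(3):            # a few global rounds
    local_ws, sizes = [], []
    for X, y in clients:              # each client trains locally
        local_ws.append(local_sgd(global_w, X, y))
        sizes.append(len(y))
    # Model aggregation: average weighted by local dataset size.
    # (Differential privacy noise or secure aggregation would be
    # applied to the client updates at this step.)
    weights = np.array(sizes) / sum(sizes)
    global_w = sum(wc * wi for wc, wi in zip(local_ws, weights))
    print(f"round {round_idx}: ||w|| = {np.linalg.norm(global_w):.3f}")
```

Weighting each client's update by its local dataset size mirrors the standard FedAvg aggregation rule; under non-IID partitions this simple weighted average is precisely where the challenges surveyed in the paper arise.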