In this paper, we define and introduce Brain Foundation Models (BFMs) as a unified framework for processing diverse neural signals. By leveraging large-scale pre-training, BFMs generalize effectively across scenarios, tasks, and modalities, overcoming the limitations of existing AI approaches. This paper provides a clear framework for building and deploying BFMs and comprehensively reviews recent methodological innovations, emerging perspectives on their applications, and open challenges in the field. We also highlight future directions and key challenges that must be addressed to fully realize the potential of BFMs, including improving brain data quality, optimizing model architectures for generalization, increasing learning efficiency, and enhancing interpretability and robustness in real-world applications.