This paper systematically reviews early studies of self-supervised learning-based EEG foundation models (EEG-FMs) for electroencephalography (EEG) data. By analyzing 10 early EEG-FMs, we find that most of them adopt Transformer-based sequence modeling with masked sequence reconstruction as the self-supervised learning objective. However, we point out that their practical applicability is difficult to assess because current model evaluations are heterogeneous and cover only a limited range of aspects.
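To illustrate the pretraining pattern shared by most of the reviewed models, the following is a minimal sketch (not taken from any specific EEG-FM) of masked sequence reconstruction with a Transformer encoder: multi-channel EEG is cut into patch tokens, a fraction of tokens is masked, and the model is trained to reconstruct the masked patches. The channel count, patch length, model size, and masking ratio below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MaskedEEGReconstructor(nn.Module):
    """Illustrative masked-reconstruction pretraining for multi-channel EEG."""
    def __init__(self, n_channels=19, patch_len=200, d_model=128,
                 n_heads=4, n_layers=4, mask_ratio=0.5, max_patches=512):
        super().__init__()
        self.mask_ratio = mask_ratio
        # Each token is one time window across all channels, embedded to d_model.
        self.embed = nn.Linear(n_channels * patch_len, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_patches, d_model))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Head reconstructs the raw patch values from encoder outputs.
        self.head = nn.Linear(d_model, n_channels * patch_len)

    def forward(self, patches):
        # patches: (batch, n_patches, n_channels * patch_len)
        tokens = self.embed(patches) + self.pos[:, :patches.size(1)]
        # Randomly mask a fraction of tokens; the encoder must fill them in.
        mask = torch.rand(tokens.shape[:2], device=tokens.device) < self.mask_ratio
        tokens = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand_as(tokens), tokens)
        recon = self.head(self.encoder(tokens))
        # Reconstruction loss is computed only on the masked positions.
        return ((recon - patches) ** 2)[mask].mean()

# Usage: 8 recordings, 30 patches each, 19 channels x 200 samples per patch.
model = MaskedEEGReconstructor()
dummy_eeg = torch.randn(8, 30, 19 * 200)
loss = model(dummy_eeg)
loss.backward()
```

The choice of patch-level masking with a mean-squared reconstruction loss is one common instantiation; individual EEG-FMs differ in tokenization, masking strategy, and reconstruction target.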