This paper presents BM-MAE, a pre-training strategy tailored to multimodal magnetic resonance imaging (MRI) data. Existing multimodal MRI analysis methods typically assume that all modalities are available, making them brittle when modalities are missing, as often happens in real-world clinical settings. BM-MAE builds on Masked Image Modeling (MIM) so that a single pre-trained model can adapt to whatever combination of modalities is available. As a result, the benefits of pre-training carry over even when fine-tuning on only a subset of modalities. Experiments show that BM-MAE matches or surpasses existing methods that pre-train a separate model for each modality combination, and substantially outperforms training from scratch across multiple downstream tasks. Furthermore, it can efficiently reconstruct missing modalities.
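The core idea, a single model that tokenizes only the modalities actually present and masks a fraction of those tokens for reconstruction, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the modality names are the standard brain MRI sequences, while the patch count, token dimension, mask ratio, and the `build_tokens`/`random_mask` helpers are hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

MODALITIES = ["T1", "T1ce", "T2", "FLAIR"]  # standard brain MRI sequences
N_PATCHES = 64   # patches per modality (hypothetical)
DIM = 32         # token dimension (hypothetical)
MASK_RATIO = 0.75

# Hypothetical learned modality embeddings, one per sequence type,
# letting the shared encoder know which modality each token came from.
modality_emb = {m: rng.normal(size=DIM) for m in MODALITIES}

def build_tokens(volumes):
    """Assemble the token sequence from the modalities actually present."""
    tokens, ids = [], []
    for m, patches in volumes.items():        # patches: (N_PATCHES, DIM)
        tokens.append(patches + modality_emb[m])
        ids += [(m, i) for i in range(len(patches))]
    return np.concatenate(tokens), ids

def random_mask(n_tokens, ratio, rng):
    """Split token indices into visible and masked sets (MIM-style)."""
    n_mask = int(round(n_tokens * ratio))
    perm = rng.permutation(n_tokens)
    return np.sort(perm[n_mask:]), np.sort(perm[:n_mask])

# Simulate a scan where only two of the four modalities are available:
# the same code path works for any subset, so one model covers them all.
available = {m: rng.normal(size=(N_PATCHES, DIM)) for m in ["T1", "FLAIR"]}
tokens, ids = build_tokens(available)
visible, masked = random_mask(len(tokens), MASK_RATIO, rng)

print(len(tokens), len(visible), len(masked))  # → 128 32 96
```

Because the token sequence is built only from available modalities, no placeholder inputs or per-combination models are needed; the encoder sees visible tokens, and a decoder would be trained to reconstruct the masked ones.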