This paper presents PRISM, a foundation model pretrained on large-scale multi-sequence MRI data to improve the generalizability of image analysis across diverse MRI sequences. We construct a large-scale, multi-institutional, multi-sequence MRI pretraining dataset comprising 336,476 3D MRI scans drawn from 34 datasets (8 public, 26 private). We propose a novel pretraining method that separates anatomically invariant features from sequence-specific variations while preserving high-dimensional semantic representations. We evaluate PRISM on a benchmark of 44 subtasks spanning disease diagnosis, image segmentation, image registration, disease progression prediction, and report generation. PRISM outperforms existing models on 39 of these tasks, demonstrating that it learns robust and generalizable representations even on unseen data acquired under diverse MRI protocols.
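The core idea of separating anatomy-invariant content from sequence-specific appearance can be illustrated with a toy model. The sketch below is purely illustrative and is not PRISM's actual architecture: it assumes each scan is generated from an anatomy code by a per-sequence affine contrast (a hypothetical simplification), so that an anatomy code recovered from a T1-weighted scan can be recombined with T2-specific factors to predict the T2-weighted appearance of the same anatomy.

```python
import numpy as np

rng = np.random.default_rng(0)
anatomy = rng.normal(size=16)                 # latent anatomy code for one subject
# hypothetical per-sequence (contrast, offset) factors; real sequence effects
# are far more complex and are learned, not fixed
seq_params = {"T1": (1.5, 0.2), "T2": (0.7, -0.4)}

def acquire(a, seq):
    """Toy forward model: scan = contrast * anatomy + offset."""
    c, b = seq_params[seq]
    return c * a + b

def encode_anatomy(x, seq):
    """Invert the toy acquisition to recover the sequence-invariant code."""
    c, b = seq_params[seq]
    return (x - b) / c

x_t1 = acquire(anatomy, "T1")                 # observed T1-weighted scan
a_hat = encode_anatomy(x_t1, "T1")            # anatomy-invariant representation
x_t2_pred = acquire(a_hat, "T2")              # recombine with T2-specific factors

# the disentangled code transfers across sequences in this toy setting
assert np.allclose(x_t2_pred, acquire(anatomy, "T2"))
```

In the toy setting the anatomy code recovered from one sequence exactly reproduces the scan under another sequence's factors, which is the intuition behind pretraining representations that generalize across acquisition protocols.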