This paper presents Multivariate Parallel Attention (MVPA), a novel self-attention mechanism that addresses the challenges of learning from multivariate time-series data with diverse channel configurations. MVPA disentangles content, temporal, and spatial attention, enabling flexible, generalizable, and efficient modeling of time series with varying channel counts and configurations. Using MVPA, we build MVPFormer, a generative model for human electrophysiology trained to predict the evolution of iEEG signals across diverse subjects. We also release the SWEC iEEG dataset, the largest publicly available iEEG dataset to date, comprising nearly 10,000 hours of recordings from diverse clinical sources. Leveraging MVPA, MVPFormer achieves strong cross-subject generalization and expert-level performance on multiple iEEG tasks: it outperforms a state-of-the-art Transformer baseline in seizure detection on the SWEC, MAYO, and FNUSA datasets, and it achieves state-of-the-art performance on four Brain TreeBank iEEG decoding tasks. We further show that MVPFormer performs on par with or better than existing attention-based models on standard time-series prediction and classification tasks. Together, these results establish MVPA as a general-purpose attention mechanism for heterogeneous time series and MVPFormer as the first open-source, open-weights, open-data iEEG model with state-of-the-art clinical performance.