Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

A foundation model with multi-variate parallel attention to generate neuronal activity

Created by
  • Haebom

Author

Francesco Carzaniga, Michael Hersche, Abu Sebastian, Kaspar Schindler, Abbas Rahimi

Outline

This paper introduces Multivariate Parallel Attention (MVPA), a novel self-attention mechanism for learning from multivariate time-series data with diverse channel configurations. MVPA disentangles content, temporal, and spatial attention, enabling flexible, generalizable, and efficient modeling of time series with varying channel counts and layouts.

Building on MVPA, the authors develop MVPFormer, a generative foundation model for human electrophysiology trained to predict the evolution of iEEG signals across diverse subjects. They also release the SWEC iEEG dataset, the largest publicly available iEEG dataset to date, comprising nearly 10,000 hours of recordings from heterogeneous clinical sources.

MVPFormer achieves strong cross-subject generalization and expert-level performance on multiple iEEG tasks: it outperforms a state-of-the-art Transformer baseline in seizure detection on the SWEC, MAYO, and FNUSA datasets, and reaches state-of-the-art results on four Brain TreeBank iEEG decoding tasks. On standard time-series forecasting and classification benchmarks, MVPFormer also performs on par with or better than existing attention-based models. MVPA thus serves as a general-purpose attention mechanism for heterogeneous time series, and MVPFormer is, per the authors, the first open-source, open-weight, open-data iEEG foundation model with state-of-the-art clinical performance.
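The separation of content, temporal, and spatial attention described above can be sketched roughly as follows. This is a minimal illustration under assumptions, not the paper's actual implementation: the function name `mvpa_sketch`, the bias tables `temporal_bias` and `spatial_bias`, and the exact way the three score terms combine are all hypothetical choices made for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mvpa_sketch(x, w_q, w_k, w_v, temporal_bias, spatial_bias):
    """Toy disentangled attention over a (channels, time, dim) input.

    The attention score is the sum of three separate terms:
      content  -- standard scaled QK^T between all (channel, time) tokens
      temporal -- a bias indexed by the time offset, shared across channels
      spatial  -- a bias indexed by the channel pair
    Only the spatial bias table depends on the channel count, so the
    projection weights can, in principle, transfer across recordings
    with different channel configurations.
    """
    C, T, D = x.shape
    tokens = x.reshape(C * T, D)
    q, k, v = tokens @ w_q, tokens @ w_k, tokens @ w_v

    content = q @ k.T / np.sqrt(D)                    # (C*T, C*T)

    t_idx = np.tile(np.arange(T), C)                  # time index per token
    c_idx = np.repeat(np.arange(C), T)                # channel index per token
    # Relative time offsets span -(T-1)..T-1; shift by T-1 to index the table.
    temporal = temporal_bias[t_idx[:, None] - t_idx[None, :] + T - 1]
    spatial = spatial_bias[c_idx[:, None], c_idx[None, :]]

    attn = softmax(content + temporal + spatial, axis=-1)
    return (attn @ v).reshape(C, T, D)

# Usage with random weights: the same projections run on a 3-channel and a
# 5-channel recording; only the spatial bias table is re-sized.
rng = np.random.default_rng(0)
D = 8
w_q, w_k, w_v = (rng.normal(size=(D, D)) * 0.1 for _ in range(3))
for C, T in [(3, 6), (5, 6)]:
    x = rng.normal(size=(C, T, D))
    out = mvpa_sketch(x, w_q, w_k, w_v,
                      temporal_bias=rng.normal(size=2 * T - 1) * 0.1,
                      spatial_bias=rng.normal(size=(C, C)) * 0.1)
    print(out.shape)
```

The design point being illustrated is that the content pathway is channel-agnostic, so varying the number of electrodes changes only the small spatial bias table, which is presumably what gives the mechanism its cross-configuration flexibility.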

Takeaways, Limitations

Takeaways:
  • A novel self-attention mechanism (MVPA) that is effective for multivariate time-series data with diverse channel configurations.
  • Development and release of MVPFormer, a generative model that predicts iEEG signal evolution across diverse subjects.
  • State-of-the-art performance on seizure detection and iEEG decoding tasks.
  • Public release of the large-scale open SWEC iEEG dataset, enabling further research.
  • Demonstration of MVPA's versatility across a range of standard time-series tasks.
Limitations:
  • MVPA's performance may be biased toward specific datasets or tasks; further verification of its generalization is needed.
  • Despite its size and diversity, the dataset may not fully reflect the variety of real-world clinical settings.
  • The paper lacks a detailed analysis of MVPFormer's computational cost and training time.