Daily Arxiv

This page collects papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright in each paper belongs to its authors and their institutions; when sharing, please cite the source.

Beyond Rate Coding: Surrogate Gradients Enable Spike Timing Learning in Spiking Neural Networks

Created by
  • Haebom

Authors

Ziqiao Yu, Pengfei Sun, Dan F. M. Goodman

A study of precise spike-timing learning in spiking neural networks

Outline

This study investigates the extent to which spiking neural networks (SNNs) trained with surrogate gradient descent (surrogate GD) can learn from precise spike timing rather than firing rates alone. Specifically, we compare performance with and without delay learning. We design synthetic tasks that isolate intra-neuron inter-spike intervals from cross-neuron synchrony while matching firing counts, and we construct variants of the Spiking Heidelberg Digits (SHD) and Spiking Speech Commands (SSC) datasets in which spike-count information is removed and only timing information remains. On these timing-only datasets, SNNs trained with surrogate GD perform above chance, while purely firing-rate-based models perform at chance. We also evaluate robustness to biologically inspired perturbations such as Gaussian jitter and spike deletion, and analyze the performance drop when temporal order is reversed; SNNs trained with delay learning degrade more under reversal, indicating a stronger reliance on spike timing. To facilitate further research, we have publicly released the modified SHD and SSC datasets.
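For intuition about the training method, surrogate GD keeps the hard, non-differentiable spike threshold in the forward pass but substitutes a smooth surrogate derivative in the backward pass, so gradients can flow through spike times. Below is a minimal PyTorch-style sketch (not the authors' code); the fast-sigmoid surrogate, its slope `beta`, and the simple leaky integrate-and-fire step are illustrative assumptions.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike forward, smooth surrogate gradient backward.

    The fast-sigmoid surrogate with slope `beta` is one common choice;
    the paper may use a different surrogate shape.
    """
    beta = 10.0  # surrogate slope (illustrative value)

    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()  # non-differentiable Heaviside

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative: 1 / (beta * |v| + 1)^2
        surrogate = 1.0 / (SurrogateSpike.beta * v.abs() + 1.0) ** 2
        return grad_output * surrogate

spike_fn = SurrogateSpike.apply

def lif_step(v, input_current, tau=0.9, v_thresh=1.0):
    """One Euler step of a leaky integrate-and-fire neuron."""
    v = tau * v + input_current   # leaky integration of input
    s = spike_fn(v - v_thresh)    # spike when membrane crosses threshold
    v = v - s * v_thresh          # soft reset after a spike
    return v, s
```

Because the backward pass sees a smooth function of the membrane potential, small weight changes shift when spikes occur, which is precisely what allows these networks to exploit timing rather than only rates.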

Takeaways, Limitations

SNNs trained with surrogate GD can learn from precise spike-timing information, not just firing rates.
Purely firing-rate-based models performed only at chance level on the timing-only datasets (i.e., with spike-count information removed).
Trained SNNs were robust to biologically inspired perturbations (Gaussian jitter, spike deletion), though the degree of degradation depended on the type of perturbation (see the sketch after this list).
Performance dropped sharply when temporal order was reversed, and the drop was even larger for SNNs trained with delay learning, suggesting that these networks genuinely exploit the temporal structure of spikes rather than spike counts alone.
The modified SHD and SSC datasets have been made publicly available to support follow-up research.
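As a rough illustration of the three perturbations discussed above (not the released code), here is a NumPy sketch operating on a spike train represented as an array of spike times; the representation, parameter names, and default values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_jitter(spike_times, sigma_ms=1.0):
    """Add zero-mean Gaussian noise to each spike time (clipped at 0)."""
    jittered = spike_times + rng.normal(0.0, sigma_ms, size=spike_times.shape)
    return np.sort(np.clip(jittered, 0.0, None))

def delete_spikes(spike_times, p_delete=0.1):
    """Drop each spike independently with probability p_delete."""
    keep = rng.random(spike_times.shape) >= p_delete
    return spike_times[keep]

def reverse_time(spike_times, duration_ms):
    """Mirror the spike train in time: spike counts are preserved,
    but temporal order and inter-spike structure are reversed."""
    return np.sort(duration_ms - spike_times)

# Example on a toy spike train over a 100 ms window
spikes = np.array([5.0, 12.0, 40.0, 41.0, 90.0])
print(gaussian_jitter(spikes))
print(delete_spikes(spikes, p_delete=0.2))
print(reverse_time(spikes, duration_ms=100.0))
```

Note that time reversal preserves per-neuron spike counts, so a pure rate-based model is unaffected by it; any accuracy drop under reversal must therefore come from timing information, which is why the larger degradation in delay-trained SNNs points to a stronger reliance on spike timing.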