This study investigates the extent to which Spiking Neural Networks (SNNs) trained with surrogate gradient descent (surrogate GD) can learn from precise spike timing beyond firing rates, and in particular how performance differs with and without delay learning. We design synthetic tasks that isolate intra-neuron inter-spike intervals from cross-neuron synchronization while matching firing counts across classes. We also construct variants of the Spiking Heidelberg Digits (SHD) and Spiking Speech Commands (SSC) datasets in which spike-count information is removed and only timing information is retained. On these timing-only benchmarks, SNNs trained with surrogate GD perform above chance level, whereas purely firing-rate-based models perform at chance level. We further evaluate robustness to biologically inspired perturbations such as Gaussian spike-time jitter and spike deletion, and analyze the performance degradation when temporal order is reversed, finding that SNNs trained with delay learning degrade more than those trained without it. To facilitate further research, we make the modified SHD and SSC datasets publicly available.
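The perturbations referenced above can be illustrated with a minimal sketch. This is not the paper's exact protocol: the event-stream representation as `(times, units)` arrays, the time window `t_max`, and parameter values such as `sigma` and `p_delete` are all illustrative assumptions.

```python
# Illustrative sketch (assumed representation, not the authors' exact
# protocol): biologically inspired perturbations on an event-based sample
# given as a pair of arrays (spike times in seconds, input-unit indices).
import numpy as np

rng = np.random.default_rng(0)

def gaussian_jitter(times, units, sigma=0.005, t_max=1.0):
    """Add zero-mean Gaussian noise (std `sigma`) to each spike time,
    clip to the valid window [0, t_max], and re-sort by time."""
    jittered = np.clip(times + rng.normal(0.0, sigma, size=times.shape), 0.0, t_max)
    order = np.argsort(jittered)
    return jittered[order], units[order]

def delete_spikes(times, units, p_delete=0.1):
    """Independently drop each spike with probability `p_delete`."""
    keep = rng.random(times.shape) >= p_delete
    return times[keep], units[keep]

def reverse_time(times, units, t_max=1.0):
    """Reverse temporal order: a spike at time t is mapped to t_max - t."""
    reversed_t = t_max - times
    order = np.argsort(reversed_t)
    return reversed_t[order], units[order]

# Toy example: a random spike train over 5 input units.
times = np.sort(rng.uniform(0.0, 1.0, size=40))
units = rng.integers(0, 5, size=40)
times, units = gaussian_jitter(times, units)
times, units = delete_spikes(times, units)
times, units = reverse_time(times, units)
```

Note that jitter and deletion preserve the rate code only approximately, whereas time reversal preserves spike counts exactly, which is what makes it a probe of timing sensitivity rather than rate sensitivity.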