This paper highlights the problem that conventional spiking neural network (SNN) learning rules focus solely on individual spike pairs, failing to leverage the synchronized activity patterns that drive learning in biological systems. We propose a novel learning method, Spike Synchronization-Dependent Plasticity (SSDP), which adjusts synaptic weights based on the degree of synchronization of neural firing rather than on the temporal order of spikes. SSDP operates as a local, post-optimization mechanism, applying updates to a sparse subset of parameters while maintaining computational efficiency through linear scaling. Furthermore, it integrates seamlessly with standard backpropagation while preserving the forward computational graph. Experiments on a variety of datasets, from static images to high-temporal-resolution tasks, demonstrate improved convergence stability and robustness to spike-time jitter and event noise across architectures ranging from a single-layer SNN to spiking transformers. These results offer new insights into how biological neural networks can leverage synchronized activity for efficient information processing, suggesting that synchronization-dependent plasticity is a fundamental computational principle of neural learning.
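To make the idea concrete, the sketch below illustrates one way a synchronization-dependent, sparse, post-optimization update could look, based only on the properties stated in the abstract. The synchrony measure (windowed spike-count correlation), the function name `ssdp_style_update`, the learning rate `eta`, the sparsity fraction `rho`, and the window length `win` are illustrative assumptions, not the paper's actual formulation.

```python
# A minimal NumPy sketch of a synchronization-dependent weight update.
# Assumptions (not from the paper): synchrony is measured as the correlation
# of pre/post spike counts over short time windows, and only the top fraction
# `rho` of most-synchronized synapses is touched, keeping the update sparse.
import numpy as np

def ssdp_style_update(weights, pre_spikes, post_spikes, eta=1e-3, rho=0.05, win=5):
    """Adjust a sparse subset of `weights` in proportion to firing synchrony.

    weights     : (n_post, n_pre) synaptic weight matrix (C-contiguous)
    pre_spikes  : (T, n_pre)  binary spike trains
    post_spikes : (T, n_post) binary spike trains
    """
    T = pre_spikes.shape[0]
    n_bins = T // win
    # Spike counts per time window; coincident counts act as a synchrony proxy.
    pre_counts = pre_spikes[: n_bins * win].reshape(n_bins, win, -1).sum(axis=1)
    post_counts = post_spikes[: n_bins * win].reshape(n_bins, win, -1).sum(axis=1)
    # Zero-mean the counts, then correlate each post/pre pair across windows.
    pre_c = pre_counts - pre_counts.mean(axis=0)
    post_c = post_counts - post_counts.mean(axis=0)
    denom = np.outer(post_c.std(axis=0), pre_c.std(axis=0)) + 1e-8
    sync = (post_c.T @ pre_c) / (n_bins * denom)   # (n_post, n_pre), in [-1, 1]
    # Sparse, local update: modify only the most strongly synchronized synapses,
    # applied after the optimizer step so the forward graph is untouched.
    k = max(1, int(rho * sync.size))
    flat = np.abs(sync).ravel()
    mask = np.zeros_like(flat, dtype=bool)
    mask[np.argpartition(flat, -k)[-k:]] = True
    weights.ravel()[mask] += eta * sync.ravel()[mask]
    return weights
```

Because the update reads only locally available spike statistics and runs after the gradient step, a sketch like this scales linearly in the number of synapses it touches and leaves backpropagation's forward computation unchanged, consistent with the properties claimed above.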