Deep learning-based speaker verification systems rely heavily on large, diverse speaker datasets. To address this limitation, this paper proposes INSIDE (Interpolating Speaker Identities in Embedding Space), a novel data augmentation method that synthesizes new speaker identities by interpolating between existing speaker embeddings. INSIDE selects pairs of nearby speaker embeddings from a pre-trained speaker embedding space and computes an intermediate embedding via spherical linear interpolation. The interpolated embeddings are then fed into a speech synthesis system to generate the corresponding speech waveforms, and the resulting data is combined with the original dataset to train downstream models. Experimental results show that models trained with INSIDE-augmented data outperform models trained solely on real data, achieving relative gains of 3.06% to 5.24% on speaker verification and a 13.44% relative gain on gender classification. INSIDE is also compatible with other augmentation techniques, making it a flexible and scalable addition to existing training pipelines.
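The abstract names spherical linear interpolation (slerp) as the mechanism for blending two speaker embeddings; the sketch below illustrates that operation in isolation, under stated assumptions. The mixing coefficient of 0.5, the 192-dimensional embedding size, and the use of randomly generated vectors are illustrative choices, not settings reported by the paper, and the paper's criterion for selecting "nearby" pairs is not reproduced here.

```python
import numpy as np

def slerp(e1: np.ndarray, e2: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Spherical linear interpolation between two (unit-normalized) embeddings.

    Returns a unit vector tracing the great-circle arc from e1 to e2;
    alpha=0 gives e1, alpha=1 gives e2.
    """
    u1 = e1 / np.linalg.norm(e1)
    u2 = e2 / np.linalg.norm(e2)
    # Angle between the two embeddings on the unit hypersphere.
    omega = np.arccos(np.clip(np.dot(u1, u2), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Embeddings (nearly) coincide; slerp degenerates to either endpoint.
        return u1
    return (np.sin((1.0 - alpha) * omega) * u1
            + np.sin(alpha * omega) * u2) / np.sin(omega)

# Illustration only: a midpoint "speaker" between two random embeddings.
# (In INSIDE, e_a and e_b would come from a pre-trained speaker encoder.)
rng = np.random.default_rng(0)
e_a, e_b = rng.standard_normal(192), rng.standard_normal(192)
e_new = slerp(e_a, e_b, alpha=0.5)
```

Slerp, unlike linear interpolation, keeps the intermediate embedding on the unit hypersphere where speaker embeddings are typically compared by cosine similarity, which is presumably why it is preferred here; near-antipodal pairs are numerically unstable, but the method restricts itself to nearby embeddings, avoiding that regime.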