In this paper, we propose Self-Supervised Positive Sampling (SSPS), a novel positive sampling technique that improves Self-Supervised Learning (SSL) for Speaker Verification (SV). Existing SSL methods are limited in that, by drawing the positive from the same utterance as the anchor, they mainly encode recording-environment information rather than speaker identity. SSPS addresses this by selecting, in the latent space, positives of the same speaker recorded under different conditions, using clustering assignments and a memory queue of positive embeddings. Applying SSPS to SimCLR and DINO on the VoxCeleb1-O dataset yields Equal Error Rates (EER) of 2.57% and 2.53%, respectively, surpassing the previous best performance. In particular, SimCLR-SSPS reduces intra-speaker variance and lowers EER by 58% relative, reaching performance comparable to DINO-SSPS.
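The core sampling idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the plain k-means clustering, the helper names, and the fallback to the same-utterance positive are all assumptions, and the memory queue of embeddings is omitted for brevity.

```python
import numpy as np

def kmeans(embeddings, k, iters=20):
    """Minimal k-means producing pseudo-speaker cluster assignments.
    (Stand-in for the clustering step; the actual clustering method
    used by SSPS is an assumption here.)"""
    # Deterministic, evenly spread initialization over the dataset.
    init_idx = np.linspace(0, len(embeddings) - 1, k).astype(int)
    centroids = embeddings[init_idx].copy()
    for _ in range(iters):
        # Assign each embedding to its nearest centroid.
        d = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        # Recompute centroids from cluster members.
        for c in range(k):
            members = embeddings[assign == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return assign

def sample_positive(anchor_idx, assignments, utterance_ids, rng):
    """Pick a positive from the anchor's cluster but a *different* utterance,
    so it plausibly shares speaker identity while differing in recording
    conditions. Falls back to the conventional same-utterance positive
    when no such candidate exists (a hypothetical fallback)."""
    same_cluster = np.flatnonzero(assignments == assignments[anchor_idx])
    candidates = [i for i in same_cluster
                  if utterance_ids[i] != utterance_ids[anchor_idx]]
    if not candidates:
        return anchor_idx
    return int(rng.choice(candidates))
```

In a full training loop, the assignments would be refreshed periodically from a memory of recent embeddings, so that positives track the encoder as it improves.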