This study introduces SynthSleepNet, a multimodal hybrid self-supervised learning framework for assessing sleep quality and diagnosing sleep disorders. The model analyzes polysomnography (PSG) biosignals, including EEG, EOG, EMG, and ECG, and learns effective representations by integrating masked prediction and contrastive learning. A Mamba-based temporal context module efficiently captures contextual dependencies across signals. SynthSleepNet achieved state-of-the-art performance in sleep stage classification, apnea detection, and hypopnea detection, and remained robust even in label-limited settings.
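To make the hybrid objective concrete, the sketch below shows one common way to combine a masked-prediction loss with a contrastive (InfoNCE-style) loss into a single training objective. This is an illustrative NumPy sketch under assumed conventions, not the authors' implementation: the function names, the MSE-on-masked-positions formulation, the temperature, and the weighting coefficient `alpha` are all hypothetical choices for exposition.

```python
import numpy as np

def masked_prediction_loss(pred, target, mask):
    """MSE computed only over masked positions (mask == 1)."""
    diff = (pred - target) ** 2
    return float((diff * mask).sum() / mask.sum())

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive loss: matching rows of z1/z2 are positives,
    all other rows in the batch serve as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature
    # Row-wise log-softmax; positives lie on the diagonal.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

def hybrid_ssl_loss(pred, target, mask, z1, z2, alpha=0.5):
    """Weighted sum of the two self-supervised objectives.
    alpha is a hypothetical balancing hyperparameter."""
    return (alpha * masked_prediction_loss(pred, target, mask)
            + (1.0 - alpha) * info_nce_loss(z1, z2))

# Toy usage: random "signal" reconstruction targets and paired embeddings.
rng = np.random.default_rng(0)
target = rng.normal(size=(4, 16))          # e.g. masked signal patches
pred = target + 0.1 * rng.normal(size=(4, 16))
mask = (rng.random((4, 16)) < 0.5).astype(float)
z1 = rng.normal(size=(4, 8))               # embeddings from two views
z2 = z1 + 0.05 * rng.normal(size=(4, 8))
loss = hybrid_ssl_loss(pred, target, mask, z1, z2)
```

A perfect reconstruction drives the masked term to zero, while well-aligned view embeddings shrink the contrastive term, so the combined loss rewards both local signal recovery and discriminative representations.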