This paper presents Subject Fidelity Optimization (SFO), a novel comparative learning framework for zero-shot subject-driven generation. SFO overcomes the limitations of supervised learning methods that rely solely on positive targets by introducing additional synthetic negative targets, steering the model to prefer positives over negatives. To this end, we propose Condition-Degradation Negative Sampling (CDNS), which generates synthetic negatives cost-effectively while preserving high-quality subject-specific detail and textual alignment. Furthermore, we recalibrate the diffusion timesteps to emphasize the intermediate steps at which subject details emerge. Experiments on subject-driven generation benchmarks show that SFO with CDNS significantly outperforms strong existing baselines in both subject fidelity and textual alignment.
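
To make the comparative idea concrete, the following is a minimal, self-contained sketch in PyTorch, not the authors' implementation. It assumes a DPO-style preference loss over denoising errors, a toy denoiser, a simple condition-masking stand-in for CDNS, and a Beta(2, 2) timestep distribution as one illustrative way to emphasize intermediate steps; all module names, hyperparameters, and the corruption process are hypothetical choices for illustration only.

# Illustrative sketch (assumptions throughout): a comparative objective that
# prefers a "positive" condition over a degraded "negative" one, with
# timesteps biased toward intermediate noise levels.
import torch
import torch.nn.functional as F

class ToyDenoiser(torch.nn.Module):
    """Stand-in for a conditional diffusion denoiser eps_theta(x_t, t, c)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim * 2 + 1, 128), torch.nn.SiLU(),
            torch.nn.Linear(128, dim),
        )

    def forward(self, x_t, t, cond):
        t_feat = t.unsqueeze(-1)  # (B, 1) timestep feature
        return self.net(torch.cat([x_t, cond, t_feat], dim=-1))

def degrade_condition(cond, drop_prob=0.3):
    # CDNS-style negative (assumed form): randomly mask parts of the
    # conditioning signal so the negative loses subject-specific detail.
    mask = (torch.rand_like(cond) > drop_prob).float()
    return cond * mask

def sample_intermediate_timesteps(batch_size):
    # Bias timesteps toward intermediate noise levels; Beta(2, 2) is an
    # illustrative choice, not the paper's schedule.
    return torch.distributions.Beta(2.0, 2.0).sample((batch_size,))

def comparative_loss(model, x0, cond_pos, beta=1.0):
    """Preference loss: denoising should succeed more with the positive
    condition than with its degraded negative counterpart."""
    t = sample_intermediate_timesteps(x0.size(0))
    noise = torch.randn_like(x0)
    # Simple linear-interpolation corruption as a stand-in forward process.
    x_t = (1 - t.unsqueeze(-1)) * x0 + t.unsqueeze(-1) * noise

    cond_neg = degrade_condition(cond_pos)
    err_pos = F.mse_loss(model(x_t, t, cond_pos), noise, reduction="none").mean(-1)
    err_neg = F.mse_loss(model(x_t, t, cond_neg), noise, reduction="none").mean(-1)
    # Prefer the positive condition: its denoising error should be lower.
    return -F.logsigmoid(beta * (err_neg - err_pos)).mean()

if __name__ == "__main__":
    model = ToyDenoiser()
    x0 = torch.randn(8, 64)     # dummy latents
    cond = torch.randn(8, 64)   # dummy subject/text conditioning
    print(float(comparative_loss(model, x0, cond)))

In this toy form, the comparative term replaces a purely supervised denoising loss on positives; the actual loss, negative construction, and timestep recalibration used by SFO and CDNS are specified in the paper itself.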