In this paper, we explore how to leverage generative models to address the scarcity of annotated training data for medical image segmentation in rare but clinically important imaging modalities. Focusing on MRI, where many sequences lack mask annotations, we present three major contributions. First, we introduce MRGen-DB, a large-scale radiology image-text dataset with rich metadata, including modality labels, attributes, regions, and organ information, together with a subset of pixel-wise mask annotations. Second, we present MRGen, a diffusion-based data engine conditioned on text prompts and segmentation masks. MRGen generates realistic images across diverse MRI modalities that lack mask annotations, enabling segmentation training where annotated sources are unavailable. Third, through extensive experiments on multiple modalities, we demonstrate that MRGen significantly improves segmentation performance on unannotated modalities by providing high-quality synthetic data. This work addresses an important gap in medical image analysis, extending segmentation capabilities to scenarios where manual annotations are difficult to obtain. The code, models, and data will be made publicly available.
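To make the mask- and text-conditioned generation setup concrete, the following is a minimal sketch of how such a data engine could be driven, using a segmentation-conditioned ControlNet pipeline from Hugging Face diffusers as a stand-in. The checkpoint names, file paths, and the ControlNet conditioning mechanism are illustrative assumptions, not MRGen's actual implementation or weights.

```python
# Minimal sketch: synthesizing one image from a (mask, text prompt) pair with a
# ControlNet-style diffusion pipeline. All model names and paths are
# placeholders standing in for MRGen's own conditioning and checkpoints.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Segmentation-conditioned ControlNet as a stand-in for the mask condition.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical inputs: an organ mask rendered as an RGB image, plus a text
# prompt describing the target modality and region (mirroring the kind of
# metadata recorded in MRGen-DB).
mask = load_image("liver_mask.png")  # placeholder path
prompt = "T2-weighted abdominal MRI, axial slice, liver highlighted"

# One synthetic image per (mask, prompt) pair; iterating over a bank of masks
# and prompts would yield a paired image-mask training set for a target
# modality that has no manual annotations.
image = pipe(prompt, image=mask, num_inference_steps=30).images[0]
image.save("synthetic_t2_liver.png")
```

Because each synthetic image is generated from a known mask, the (image, mask) pairs can be used directly as supervised training data for a segmentation model on the otherwise unannotated modality.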