OmniGen2 is a versatile, open-source generative model that provides a unified solution for diverse generation tasks, including text-to-image generation, image editing, and in-context generation. Unlike OmniGen v1, it features two distinct decoding pathways for the text and image modalities, with unshared parameters and a decoupled image tokenizer. This design enables OmniGen2 to build upon existing multimodal understanding models without the need to re-adapt VAE inputs, thereby preserving the original text generation capability. To facilitate the training of OmniGen2, we developed comprehensive data construction pipelines covering image editing and in-context generation data. In addition, we introduce a reflection mechanism tailored to image generation tasks and curate a dedicated reflection dataset based on OmniGen2. Despite its relatively modest parameter size, OmniGen2 achieves competitive results on multiple benchmarks, including text-to-image generation and image editing. To further evaluate in-context generation (also known as subject-driven tasks), we introduce a new benchmark named OmniContext, on which OmniGen2 achieves state-of-the-art consistency among open-source models. We will release the models, training code, datasets, and data construction pipeline to support future research in this area.