We conducted a study in medical image segmentation to determine whether a robust, general-purpose omnimodel capable of handling diverse data can perform on par with specialized models. We compared the zero-shot performance of a state-of-the-art omnimodel (Gemini, the "Nano Banana" model) with that of specialized deep learning models on three tasks: polyp segmentation (endoscopy), retinal vessel segmentation (fundus), and breast tumor segmentation (ultrasound). Using the expert model's accuracy as the ranking criterion, we selected the "easiest" and "most difficult" cases from each dataset to probe performance at both extremes. For polyp and breast tumor segmentation, the expert model outperformed the omnimodel on the easy samples, but the omnimodel proved more robust on difficult samples where the expert failed. Conversely, for retinal vessel segmentation, the expert model maintained superior performance across both easy and difficult cases. Furthermore, the omnimodel demonstrated high sensitivity, identifying subtle anatomical features missed by human annotators.
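The case-selection step described above can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: it assumes the expert model's per-case accuracy is measured by the Dice coefficient (the source does not specify the metric), and the function and parameter names are hypothetical.

```python
import numpy as np


def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)


def select_extreme_cases(expert_preds, ground_truths, k=5):
    """Rank cases by the expert model's Dice score and return the
    indices of the k easiest (highest Dice) and k hardest (lowest
    Dice) cases, as in the extreme-case evaluation described above.
    """
    scores = np.array(
        [dice_score(p, g) for p, g in zip(expert_preds, ground_truths)]
    )
    order = np.argsort(scores)          # ascending by Dice
    easiest = order[-k:][::-1]          # top-k, best first
    hardest = order[:k]                 # bottom-k, worst first
    return easiest, hardest
```

Both model families would then be evaluated only on the selected easiest and hardest subsets, which is what makes the robustness comparison on failure cases possible.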