We introduce a Y-shaped generative flow that transports probability mass jointly along a shared path before branching toward target-specific endpoints. The method is built on a novel velocity-based objective with a sublinear exponent (between 0 and 1), which makes moving mass together, and moving it quickly, comparatively cheap. We implement this as a scalable neural ODE training objective; the resulting flows recover hierarchical structure, improve distributional metrics over strong flow-based baselines, and reach the target distribution in fewer integration steps.
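To make the role of the sublinear exponent concrete, the sketch below shows one plausible form of such a velocity term attached to a standard flow-matching-style regression loss. This is a minimal illustration under our own assumptions, not the paper's actual objective: the network `VelocityNet`, the penalty weight `lam`, the exponent `p`, and the linear interpolation path are all hypothetical choices introduced here for exposition.

```python
# Minimal sketch (not the authors' code) of a velocity penalty with a
# sublinear exponent p in (0, 1), added to a flow-matching-style loss.
# All names (VelocityNet, lam, p) are illustrative assumptions.
import torch
import torch.nn as nn


class VelocityNet(nn.Module):
    """Small MLP predicting the flow velocity v(x, t)."""

    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Concatenate state and time, then predict a velocity vector.
        return self.net(torch.cat([x, t], dim=-1))


def sublinear_velocity_penalty(v: torch.Tensor, p: float = 0.5) -> torch.Tensor:
    """Penalize ||v||^p with 0 < p < 1, so the marginal cost of extra speed
    (or of moving more mass along the same path) decreases rather than grows."""
    eps = 1e-8  # keep the gradient finite near ||v|| = 0
    return (v.norm(dim=-1) + eps).pow(p).mean()


def training_step(model: VelocityNet, x0: torch.Tensor, x1: torch.Tensor,
                  lam: float = 0.1, p: float = 0.5) -> torch.Tensor:
    """One illustrative loss evaluation: regression to a conditional target
    velocity on a linear path, plus the sublinear velocity term."""
    t = torch.rand(x0.shape[0], 1)
    xt = (1 - t) * x0 + t * x1      # point on the interpolation path
    v_target = x1 - x0              # conditional target velocity
    v_pred = model(xt, t)
    fm_loss = ((v_pred - v_target) ** 2).mean()
    return fm_loss + lam * sublinear_velocity_penalty(v_pred, p)
```

Because the exponent is below 1, the penalty grows more slowly than the velocity magnitude, which in a shared-path setting favors transporting mass together at higher speed over splitting it into many slower, separate trajectories.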