This paper proposes a dual-flow generative ranking network (DFGR) to improve the efficiency of deep learning recommendation models (DLRMs). Existing DLRMs rely on manual feature engineering, which leads to high complexity and poor scalability. Meta's HSTU-based generative model addresses these issues but suffers reduced training and inference efficiency because of its longer input sequences. DFGR mitigates this by separating user action sequences into a real flow and a fake flow and redefining how the two flows interact within the QKV module of the self-attention mechanism. Experimental results show that DFGR outperforms existing recommendation models, including DLRM, Meta's HSTU, DIN, DCN, DIEN, and DeepFM, indicating that it is an efficient next-generation generative ranking paradigm with an optimal parameter-allocation strategy under computational resource constraints.
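
The dual-flow interaction described above can be illustrated with a minimal sketch. The masking rule below is an assumption for illustration: real-flow tokens attend causally to earlier real-flow tokens, while fake-flow tokens attend only to real-flow history (never to other fake-flow tokens). The paper's exact QKV redefinition may differ; the function names (`dual_flow_mask`, `masked_attention`) are hypothetical.

```python
import numpy as np

def dual_flow_mask(flow_ids):
    """Build a boolean attention mask for an interleaved sequence.

    flow_ids[t] is 0 for a real-flow token, 1 for a fake-flow token.
    mask[i, j] == True means query i may attend to key j. Under the
    assumed rule, every query attends causally (j <= i) and only to
    real-flow keys, so fake-flow tokens never see each other.
    """
    flow_ids = np.asarray(flow_ids)
    n = len(flow_ids)
    causal = np.tril(np.ones((n, n), dtype=bool))   # j <= i
    real_keys = (flow_ids == 0)[None, :]            # keys restricted to real flow
    return causal & real_keys

def masked_attention(Q, K, V, mask):
    """Single-head scaled dot-product attention with a boolean mask."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores = np.where(mask, scores, -1e9)           # block disallowed pairs
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Interleave real (0) and fake (1) tokens as in a dual-flow sequence.
flow_ids = [0, 1, 0, 1]
mask = dual_flow_mask(flow_ids)

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out = masked_attention(Q, K, V, mask)
```

Because fake-flow queries never attend to fake-flow keys, the fake flow adds no key/value entries to the attention computation that other tokens must read, which is one plausible way such a design could reduce the effective attention cost relative to treating the full interleaved sequence uniformly.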