This paper proposes a time-aware single-step diffusion network (TADSR) to overcome a limitation of existing single-step real-world image super-resolution (Real-ISR) methods: they apply the Stable Diffusion (SD) model at a fixed time step and therefore fail to exploit the distinct generative priors that SD encodes at different noise-injection time steps. TADSR introduces a time-aware VAE encoder that projects the input image into different latent features at different time steps. By varying the time step and the corresponding latent features jointly, the student model aligns better with the input distribution of the pre-trained SD. Furthermore, a time-aware variational score distillation (VSD) loss bridges the time steps of the student model and the SD teacher, providing consistent guidance from the generative prior. As a result, TADSR achieves state-of-the-art performance and controllable super-resolution in a single step, and allows the trade-off between fidelity and realism to be controlled by varying the time-step condition.