Diffusion models are widely used for image and video generation, but their iterative sampling process is slow and expensive. Existing distillation methods have demonstrated the potential for one-step generation in the image domain, yet they still suffer from significant quality degradation. In this work, we propose adversarial post-training (APT) against real data, following diffusion pre-training, for one-step video generation. To improve training stability and quality, we introduce several improvements to the model architecture and training procedure, along with an approximate R1 regularization objective. Our experiments show that the adversarially post-trained model, Seaweed-APT, can generate 2-second, 1280x720, 24fps videos in real time using a single forward evaluation step. In addition, the same model can generate 1024px images in a single step, achieving quality comparable to state-of-the-art methods.
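As a rough illustration of the regularization idea, an R1-style penalty can be approximated without second-order gradients by measuring how much the discriminator's output changes under a small perturbation of a real sample. The sketch below is a minimal, hypothetical version of such a finite-difference approximation; the discriminator, `sigma`, and `gamma` are illustrative placeholders, not the paper's actual architecture or hyperparameters.

```python
import random

def approx_r1_penalty(discriminator, x, sigma=0.01, gamma=100.0):
    """Finite-difference stand-in for the R1 gradient penalty.

    Classic R1 penalizes the squared gradient norm ||grad_x D(x)||^2 on
    real samples. Here that norm is approximated by the change in the
    discriminator's scalar output when the real input x is perturbed by
    small Gaussian noise, avoiding any higher-order differentiation.
    """
    x_pert = [xi + random.gauss(0.0, sigma) for xi in x]
    d_real = discriminator(x)
    d_pert = discriminator(x_pert)
    return gamma * (d_real - d_pert) ** 2
```

For a discriminator that is locally flat around real data, the penalty is zero, which is the behavior the exact R1 term encourages; a discriminator with steep local gradients incurs a large penalty.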