This paper proposes a novel method to accelerate inference in Diffusion Transformers (DiTs). The conventional TaylorSeer approach caches the intermediate features of every transformer block and predicts future features via Taylor expansion; however, it incurs significant memory and computational overhead and does not account for the accuracy of its predictions. In this paper, we reduce the number of cached features by shifting the Taylor prediction target to the output of the last block, and we propose a dynamic caching mechanism that decides whether to reuse predictions based on the prediction error of the first block. This improves the speed-quality trade-off, achieving inference speedups of 3.17x, 2.36x, and 4.14x on FLUX, DiT, and Wan Video, respectively.
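To make the mechanism concrete, the sketch below shows one way such a scheme could be wired around a DiT block stack in PyTorch. It is an illustrative assumption rather than the paper's implementation: the class name `TaylorCachedDiT`, the `error_threshold` parameter, the first-order extrapolation, and the relative-error gate are all placeholders chosen for the example. The idea it demonstrates is caching only the first and last blocks' outputs, extrapolating the last block's features with a first-order Taylor step, and falling back to a full forward pass whenever the first block's prediction error is too large.

```python
import torch
import torch.nn as nn


class TaylorCachedDiT(nn.Module):
    """Minimal sketch of last-block Taylor prediction gated by the
    first block's prediction error. Hypothetical wrapper, not the
    paper's code; `blocks` is assumed to be the DiT's block stack."""

    def __init__(self, blocks: nn.ModuleList, error_threshold: float = 0.05):
        super().__init__()
        self.blocks = blocks
        self.error_threshold = error_threshold  # assumed relative-error gate
        # Only the first and last blocks' outputs are kept (two timesteps
        # each), instead of every block's features as in TaylorSeer.
        self.first_hist = []
        self.last_hist = []

    @staticmethod
    def _taylor_predict(hist):
        # First-order Taylor (finite-difference) extrapolation:
        # f(t) ~ f(t-1) + (f(t-1) - f(t-2))
        return 2 * hist[-1] - hist[-2]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Always compute the first block; it serves as the error probe.
        first_out = self.blocks[0](x)

        if len(self.first_hist) >= 2:
            pred_first = self._taylor_predict(self.first_hist)
            rel_err = (pred_first - first_out).norm() / first_out.norm()
            if rel_err < self.error_threshold:
                # Prediction deemed accurate: skip the remaining blocks and
                # return the extrapolated last-block features.
                return self._taylor_predict(self.last_hist)

        # Otherwise run the full stack and refresh both caches.
        h = first_out
        for blk in self.blocks[1:]:
            h = blk(h)
        self.first_hist = (self.first_hist + [first_out.detach()])[-2:]
        self.last_hist = (self.last_hist + [h.detach()])[-2:]
        return h
```

In this sketch the per-step decision costs only one block evaluation plus two cheap extrapolations, which is where the memory and compute savings over caching all blocks' features would come from; the threshold trades speed against fidelity.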