This paper presents a method for accelerating the Diffusion Transformer (DiT), a state-of-the-art architecture for video generation. DiT suffers from slow inference due to its iterative denoising process, and existing acceleration methods either degrade quality or struggle to reuse intermediate features effectively. By analyzing how features evolve across DiT blocks, we find that intermediate stages exhibit high feature similarity. Based on this observation, we propose Block-Wise Caching (BWCache), a training-free acceleration technique. BWCache dynamically caches and reuses features at DiT blocks, guided by a similarity metric, to skip redundant computation while preserving visual quality. Experiments demonstrate speedups of up to 2.24x across multiple video diffusion models.
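
The abstract only outlines the caching mechanism, so the following is a minimal sketch of what similarity-gated block-wise feature reuse could look like. The class name `CachedDiTBlock`, the `threshold` hyperparameter, and the relative-L1 similarity indicator are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class CachedDiTBlock(nn.Module):
    """Hypothetical wrapper: reuse a DiT block's cached output when its
    input changes little between diffusion timesteps."""

    def __init__(self, block: nn.Module, threshold: float = 0.05):
        super().__init__()
        self.block = block
        self.threshold = threshold   # assumed similarity threshold
        self.prev_input = None       # block input from the previous timestep
        self.cached_output = None    # block output from the previous timestep

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.prev_input is not None and self.cached_output is not None:
            # Relative L1 change between current and previous inputs;
            # a small change suggests the cached output is still valid.
            diff = (x - self.prev_input).abs().mean() / (
                self.prev_input.abs().mean() + 1e-8
            )
            if diff < self.threshold:
                return self.cached_output  # skip the block computation

        out = self.block(x)
        self.prev_input = x.detach()
        self.cached_output = out.detach()
        return out
```

In this sketch, a larger `threshold` skips more block evaluations (greater speedup, lower fidelity), while a smaller one recomputes more often; the paper's actual gating criterion and per-block policy may differ.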