This paper presents OmniCache, a training-free acceleration method that addresses the high computational cost of Transformer architectures in diffusion-based image and video generation. Unlike existing methods that derive caching strategies from inter-step feature similarity and concentrate reuse in the late stages of sampling, OmniCache distributes cache reuse strategically by analyzing the diffusion sampling process as a whole. This enables efficient cache utilization across the entire sampling trajectory; in addition, OmniCache dynamically estimates and removes the noise introduced by cache reuse, reducing its impact on the sampling direction. Experimental results demonstrate that OmniCache accelerates sampling while maintaining generation quality, offering a practical solution for efficient deployment of diffusion models.
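To make the core idea of feature caching concrete, the following is a minimal, self-contained sketch of reusing a Transformer block's output at selected sampling steps instead of recomputing it. This is an illustration of the general caching technique only, not OmniCache's actual algorithm: the block, the update rule, and the set of reuse steps (`heavy_transformer_block`, `reuse_steps`) are hypothetical stand-ins, and the noise estimation/removal step that OmniCache applies during reuse is omitted.

```python
import numpy as np

def heavy_transformer_block(x, t):
    """Hypothetical stand-in for an expensive Transformer forward pass."""
    return np.tanh(x + 0.1 * t)

def sample_with_cache(x, num_steps=10, reuse_steps=frozenset({3, 4, 6, 7, 9})):
    """Toy denoising loop: at steps listed in `reuse_steps`, reuse the
    cached block output instead of recomputing it.

    Returns the final sample and the number of full forward passes,
    so the savings from caching are directly visible.
    """
    cache = None
    recomputed = 0
    for t in range(num_steps):
        if t in reuse_steps and cache is not None:
            feat = cache                           # cheap: reuse cached features
        else:
            feat = heavy_transformer_block(x, t)   # expensive: full forward pass
            cache = feat
            recomputed += 1
        x = x - 0.05 * feat  # toy update standing in for the sampler step
    return x, recomputed
```

With 10 steps and 5 reuse steps, only 5 full forward passes are executed; the quality/speed trade-off then hinges on where in the trajectory those reuse steps are placed, which is the scheduling question OmniCache's analysis addresses.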