This paper addresses the growing challenge of AI-generated videos with a novel detection technique that overcomes the limitations of existing methods. We establish a theoretical framework based on second-order dynamics analysis under Newtonian mechanics and extend the second-order central difference feature to temporal artifact detection. This analysis reveals fundamental differences in the distributions of second-order features between real and AI-generated videos, motivating a new, training-free detection method: Detection by Difference of Differences (D3). We validate D3 on four open-source datasets (Gen-Video, VideoPhy, EvalCrafter, and VidProM), demonstrating a 10.39% improvement in average precision over the best-performing existing method. We further demonstrate its computational efficiency and robustness experimentally.
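To make the core feature concrete, the sketch below computes a second-order central difference along the temporal axis of a video tensor. This is only an illustration of the standard finite-difference operator named in the abstract: the frame representation (grayscale arrays) and the per-frame aggregation are assumptions, not the paper's exact D3 pipeline.

```python
import numpy as np

def second_order_feature(frames: np.ndarray) -> np.ndarray:
    """Second-order central difference along time for a video tensor.

    frames: array of shape (T, H, W) with T >= 3 (e.g. grayscale frames).
    Returns the per-frame mean absolute second difference, shape (T - 2,).
    Note: this aggregation is an illustrative choice, not the paper's exact feature.
    """
    # Central-difference approximation of the second temporal derivative:
    # a_t = x_{t+1} - 2 * x_t + x_{t-1}
    accel = frames[2:] - 2.0 * frames[1:-1] + frames[:-2]
    return np.abs(accel).mean(axis=(1, 2))

# Toy example: intensity growing linearly in time (constant "velocity")
# has an identically zero second difference.
t = np.arange(5, dtype=float)[:, None, None]
linear = np.tile(t, (1, 4, 4))
print(second_order_feature(linear))  # → [0. 0. 0.]
```

Intuitively, the method compares the distribution of such second-order (acceleration-like) features between real footage, which obeys smooth Newtonian dynamics, and generated footage, whose temporal artifacts perturb them.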