This paper introduces "FakeParts," a new class of deepfakes that alter specific spatial regions or temporal intervals of genuine videos through subtle, localized manipulations. Unlike fully synthesized content, partial manipulations such as facial expression changes, object replacements, and background modifications blend seamlessly with real elements, making them particularly deceptive and difficult to detect. To address this gap in detection capability, the paper presents "FakePartsBench," the first large-scale benchmark dataset specifically designed to capture the full spectrum of partial deepfakes. Comprising over 25,000 videos with pixel- and frame-level manipulation annotations, the dataset enables comprehensive evaluation of detection methods. User studies show that FakeParts reduces human detection accuracy by over 30% relative to conventional deepfakes, with a similar degradation observed in state-of-the-art detection models. These findings expose critical vulnerabilities in current deepfake detection methods, and FakePartsBench provides a resource for developing more robust defenses against partial video manipulation.