GraspClutter6D is a large-scale real-world dataset for robust robotic grasping in cluttered environments. It addresses the limitations of existing datasets, which suffer from simple scenes, low occlusion rates, and limited diversity. The dataset features 1,000 densely cluttered scenes (14.1 objects per scene and a 62.6% occlusion rate on average) spanning 200 objects and 75 environment configurations (boxes, shelves, and tables), captured from multiple viewpoints with four RGB-D cameras. Rich annotations are provided, including 736K 6D object poses and 9.3B possible robot grasps across 52K RGB-D images. We benchmark existing state-of-the-art segmentation, object pose estimation, and grasp detection methods to analyze how these tasks perform in cluttered environments, and demonstrate that a grasping network trained on GraspClutter6D outperforms networks trained on existing datasets in both simulation and real-world experiments. The dataset, toolkit, and annotation tools are publicly available.
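To make the annotation types concrete, the sketch below shows one common way such data is represented: a 6D object pose as a 4x4 homogeneous transform, and a grasp as a gripper pose plus an opening width, composed to express an object-frame grasp in the camera frame. All names and the sample structure here are hypothetical illustrations, not the actual GraspClutter6D toolkit API.

```python
# Hypothetical sketch of 6D-pose and grasp annotations; NOT the
# GraspClutter6D toolkit API, just a minimal numpy illustration.
import numpy as np

def pose_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# A 6D object pose: object frame -> camera frame (here, 0.5 m in front of the camera).
T_cam_obj = pose_matrix(np.eye(3), np.array([0.0, 0.0, 0.5]))

# A grasp annotated in the object frame: gripper pose plus opening width (meters).
grasp_obj = {
    "pose": pose_matrix(np.eye(3), np.array([0.0, 0.02, 0.0])),
    "width": 0.04,
}

# Express the grasp in the camera frame by composing the two transforms.
grasp_cam = {
    "pose": T_cam_obj @ grasp_obj["pose"],
    "width": grasp_obj["width"],
}
print(grasp_cam["pose"][:3, 3])  # grasp position in camera coordinates
```

Representing poses and grasps as homogeneous transforms keeps frame composition a single matrix product, which is why multi-view datasets with per-camera annotations typically adopt this convention.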