GraspClutter6D is a large-scale real-world grasp dataset designed to address robust robotic grasping in cluttered environments. To overcome the simple scenes and limited diversity of existing datasets, it contains 1,000 highly cluttered scenes (14.1 objects per scene, 62.6% occlusion), 52,000 RGB-D images of 200 objects captured from diverse viewpoints in 75 environment configurations (boxes, shelves, and tables), 736,000 6D object poses, and 9.3 billion feasible robot grasps. In this paper, we use the dataset to benchmark state-of-the-art segmentation, object pose estimation, and grasp detection methods, and show that grasp networks trained on GraspClutter6D outperform networks trained on existing datasets in both simulation and real-world experiments. The dataset, toolkit, and annotation tools are publicly available.