This paper presents the first distributed approach that enables realistic six-degree-of-freedom (6-DoF) manipulation of cable-suspended objects by a team of multiple Micro-Aerial Vehicles (MAVs). We use multi-agent reinforcement learning (MARL) to train an outer-loop control policy for each MAV. Unlike state-of-the-art centralized controllers, this policy requires neither global state, inter-MAV communication, nor information about neighboring MAVs. Instead, agents communicate implicitly through observations of the load's attitude alone, enabling high scalability and flexibility. The approach also significantly reduces computational cost at inference time, enabling onboard deployment of the policy. Furthermore, we introduce a novel action-space design for MAVs based on linear acceleration and body rates. Combined with a robust low-level controller, this choice enables reliable sim-to-real transfer despite significant uncertainty caused by cable tension during dynamic 3D motion. We validate our method in various real-world experiments, including full attitude control under load model uncertainty, demonstrating setpoint tracking performance comparable to that of state-of-the-art centralized methods. Additionally, we demonstrate cooperation between agents with heterogeneous control policies and robustness to the complete in-flight loss of a single MAV.