This paper presents the first distributed control approach for manipulating a cable-suspended load with six degrees of freedom (DOF) in a real-world environment using multiple micro aerial vehicles (MAVs). Multi-agent reinforcement learning (MARL) is used to train a high-level control policy for each MAV. Unlike existing centralized control approaches, the proposed approach requires no global state information, inter-MAV communication, or information about neighboring MAVs. Instead, the agents communicate implicitly, using only the load's attitude, which provides high scalability and flexibility. The approach also significantly reduces computational overhead during inference, enabling onboard deployment. A novel motion space design based on linear acceleration and attitude rate, combined with a robust low-level sub-controller, enables reliable simulation-to-real-world transfer despite the significant uncertainty induced by cable tension during dynamic 3D motion. The approach is validated through various real-world experiments, including full attitude control under load model uncertainty, demonstrating setpoint tracking performance comparable to state-of-the-art centralized methods. Finally, the proposed approach demonstrates cooperation between agents with heterogeneous control policies and robustness to the complete in-flight loss of a single MAV.
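To make the observation and action interfaces described above concrete, the following minimal sketch illustrates how a per-MAV policy of this kind could be structured. All names (e.g., `AgentPolicy`, `own_state`, `load_attitude`) and the placeholder linear mapping are assumptions for illustration only; the actual trained network, observation layout, and sub-controller are defined by the paper, not here.

```python
import numpy as np

class AgentPolicy:
    """Hypothetical per-MAV high-level policy (stand-in for the trained MARL network).

    Observation: the MAV's own state plus the load's attitude only --
    no global state, no inter-MAV communication, no neighbor information.
    Action: desired linear acceleration (3) and attitude rate (3),
    tracked onboard by a robust low-level sub-controller.
    """

    def __init__(self, obs_dim: int, act_dim: int = 6, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Placeholder linear policy; the real approach trains a neural network via MARL.
        self.weights = 0.01 * rng.standard_normal((act_dim, obs_dim))

    def act(self, own_state: np.ndarray,
            load_attitude: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        obs = np.concatenate([own_state, load_attitude])  # purely local observation
        action = self.weights @ obs
        lin_acc_cmd = action[:3]   # desired linear acceleration [m/s^2]
        att_rate_cmd = action[3:]  # desired attitude rate [rad/s]
        return lin_acc_cmd, att_rate_cmd


# Usage: each MAV runs its own policy instance fully onboard.
own_state = np.zeros(9)      # e.g., this MAV's position, velocity, attitude
load_attitude = np.zeros(3)  # shared signal enabling implicit coordination
policy = AgentPolicy(obs_dim=own_state.size + load_attitude.size)
lin_acc_cmd, att_rate_cmd = policy.act(own_state, load_attitude)
# Both commands are handed to the low-level sub-controller for tracking.
```

Because the policy depends only on locally available quantities, each agent can be deployed independently, which is what makes the scheme scale with the number of MAVs.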