This paper addresses the problem of safe policy learning in safety-critical multi-agent autonomous systems. Each agent must always fulfill safety requirements while simultaneously cooperating with other agents to perform tasks. To this end, we propose a hierarchical multi-agent reinforcement learning (HMARL) approach based on control barrier functions (CBFs). The proposed hierarchical approach decomposes the overall reinforcement learning problem into joint cooperative action learning at the high level and safe individual action learning at the low (agent) level, conditioned on the high-level policies. In particular, we propose a skill-based HMARL-CBF algorithm, where the high-level problem learns a common policy over the skills of all agents, and the low-level problem learns a policy to safely execute the skills using CBFs. We validate this approach in a challenging scenario in which many agents must safely navigate a road network with conflicting routes. Compared to existing state-of-the-art methods, the proposed approach significantly improves safety, achieving a near-perfect success/safety rate (failures below 5%) while also improving performance across all environments.
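To make the low-level safety mechanism concrete, the following is a minimal illustrative sketch (not the paper's algorithm) of a CBF safety filter for a single-integrator agent with dynamics x' = u. The barrier function h(x) = ||x - c||^2 - r^2, the obstacle center `c`, radius `r`, and the gain `alpha` are all hypothetical choices for illustration; the filter minimally perturbs a nominal action so that the CBF condition dh/dt >= -alpha * h(x) holds, keeping the agent outside the unsafe set.

```python
# Illustrative CBF safety filter (assumed setup, not the paper's method):
# single-integrator dynamics x' = u, circular obstacle at c with radius r,
# barrier h(x) = ||x - c||^2 - r^2, safety condition dh/dt >= -alpha * h.
import numpy as np

def cbf_filter(x, u_nom, c, r, alpha=1.0):
    """Return the minimal perturbation of u_nom satisfying the CBF condition.

    Projects u_nom onto the half-space {u : grad_h . u >= -alpha * h(x)},
    which is the closed-form solution of the standard CBF quadratic program
    for a single affine constraint.
    """
    h = float(np.dot(x - c, x - c) - r**2)   # barrier value (>0 means safe)
    grad_h = 2.0 * (x - c)                   # gradient of h at x
    lhs = float(np.dot(grad_h, u_nom))       # dh/dt under the nominal action
    if lhs >= -alpha * h:                    # nominal action already safe
        return u_nom
    # Smallest correction lies along grad_h
    lam = (-alpha * h - lhs) / float(np.dot(grad_h, grad_h))
    return u_nom + lam * grad_h

# Agent at distance 2 from the obstacle, nominal action pointing at it:
x = np.array([2.0, 0.0])
c = np.zeros(2)
r = 1.0
u_safe = cbf_filter(x, np.array([-1.0, 0.0]), c, r)
```

In the proposed hierarchy, a learned low-level policy would play the role of `u_nom`, with the filter guaranteeing that executing a high-level skill never violates the safety constraint.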