This paper presents Quality Diversity Inverse Reinforcement Learning (QD-IRL), a novel framework that integrates Quality Diversity (QD) optimization with Inverse Reinforcement Learning (IRL) to overcome the limitations of single-expert policy learning and to learn diverse and robust behaviors. Specifically, we introduce Extrinsic Behavioral Curiosity (EBC), which provides an additional curiosity reward based on the novelty of a behavior relative to the existing behavior archive. Experiments on various robotic locomotion tasks demonstrate that EBC improves the performance of QD-IRL instances built on GAIL, VAIL, and DiffAIL by up to 185%, and surpasses expert performance by up to 20% in a humanoid environment. Furthermore, we show that EBC also applies to gradient-arborescence-based QD reinforcement learning algorithms, indicating that it is a general technique for significantly improving QD performance. The source code is available on GitHub.
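
To make the EBC mechanism concrete, the sketch below shows one plausible form of the curiosity bonus: novelty measured as the mean distance from a policy's behavior descriptor to its k nearest neighbors in the behavior archive, added to the learned IRL reward. The function name, the k-nearest-neighbor novelty measure, and the additive combination with the IRL reward are illustrative assumptions; the abstract does not specify the exact formulation.

```python
# Minimal sketch of an EBC-style curiosity bonus (assumed formulation;
# the exact novelty measure is not given in the abstract). Novelty is
# the mean distance from the evaluated policy's behavior descriptor to
# its k nearest neighbors among descriptors already in the archive.
import numpy as np

def curiosity_bonus(descriptor: np.ndarray,
                    archive: np.ndarray,
                    k: int = 10,
                    weight: float = 1.0) -> float:
    """Extrinsic curiosity bonus for one behavior descriptor.

    descriptor: behavior descriptor of the evaluated policy, shape (d,)
    archive:    descriptors stored in the behavior archive, shape (n, d)
    """
    if archive.shape[0] == 0:
        # Assumption: grant the full bonus when the archive is empty.
        return weight
    dists = np.linalg.norm(archive - descriptor, axis=1)
    k = min(k, dists.shape[0])
    novelty = np.partition(dists, k - 1)[:k].mean()
    return weight * novelty

# Usage (hypothetical): combine with the learned IRL reward during training.
# total_reward = irl_reward + curiosity_bonus(bd, archive_descriptors)
```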