This paper examines red teaming as a way to surface potential risks in AI models. We observe that existing automated red-teaming approaches fail to account for human backgrounds and identities, and we propose PersonaTeaming, a method that uses personas to explore a more diverse space of adversarial strategies. We develop a methodology for modifying prompts based on personas, such as a "red team expert" or a "general AI user," together with an algorithm that automatically generates a variety of persona types, and we introduce a new metric for measuring the diversity of adversarial prompts. In our experiments, PersonaTeaming improves attack success rates by up to 144.1% over RainbowPlus, an existing state-of-the-art automated red-teaming method. We discuss the strengths and limitations of different persona types and modification methods, and suggest future research directions for exploring the complementarity between automated and human red-teaming approaches.
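To make the persona-based modification idea concrete, below is a minimal sketch of persona-conditioned prompt rewriting. It is an illustration only: the persona descriptions, prompt template, and function names (`mutate_prompt`, `persona_team`, `generate`) are our own assumptions rather than the paper's implementation, and `generate` stands in for any text-generation backend.

```python
"""Minimal sketch of persona-conditioned prompt modification (illustrative only)."""

from typing import Callable, List

# A "persona" here is just a short natural-language description of who is
# (hypothetically) writing the adversarial prompt.
PERSONAS = {
    "red_team_expert": (
        "an experienced AI red-teamer who probes models with subtle, "
        "indirect phrasings"
    ),
    "general_ai_user": (
        "an everyday user with no security background who phrases "
        "requests casually and conversationally"
    ),
}


def mutate_prompt(
    seed_prompt: str,
    persona_description: str,
    generate: Callable[[str], str],
) -> str:
    """Rewrite a seed adversarial prompt from a given persona's perspective.

    `generate` is any text-generation function (e.g. a thin wrapper around
    an LLM API) that maps an instruction string to a completion string.
    """
    instruction = (
        f"You are {persona_description}.\n"
        "Rewrite the following prompt in your own voice and style, "
        f"keeping its underlying intent:\n\n{seed_prompt}"
    )
    return generate(instruction)


def persona_team(
    seed_prompts: List[str],
    generate: Callable[[str], str],
) -> List[str]:
    """Produce one persona-modified variant per (seed prompt, persona) pair."""
    return [
        mutate_prompt(seed, description, generate)
        for seed in seed_prompts
        for description in PERSONAS.values()
    ]
```

For example, passing a wrapper around any chat-completion API as `generate` yields one persona-styled variant of each seed prompt per persona; these variants could then be scored for attack success and diversity.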