This paper presents the results of a study evaluating the performance of ten leading open- and closed-source large language models (LLMs) on the International Association of Privacy Professionals (IAPP) CIPP/US, CIPM, CIPT, and AIGP certification exams. On closed-ended exam questions, models from OpenAI, Anthropic, Google DeepMind, Meta, and DeepSeek were evaluated; state-of-the-art models such as Gemini 2.5 Pro and OpenAI's GPT-5 exceeded the passing standards set for human experts, demonstrating substantial expertise in privacy law, technical controls, and AI governance. This study offers practical insight into assessing the readiness of AI tools for critical data-governance roles, provides an overview for professionals navigating the intersection of AI development and regulatory risk, and establishes machine benchmarks grounded in human-centered assessments.