This study developed a data extraction platform using a large language model (LLM) to improve the efficiency of the knowledge synthesis (literature review) process in health professions education (HPE). AI and human extraction results were compared and analyzed for 187 articles from an existing scoping review, using 17 extraction questions. Agreement between AI and human extraction varied by question type: it was high for specific, explicitly stated items (e.g., title, objectives) and low for items requiring subjective interpretation or not explicitly stated in the text (e.g., Kirkpatrick outcomes, research background). The AI made significantly fewer errors than the human extractors, and most of the disagreement between AI and human extraction stemmed from differences in interpretation. This suggests that iterating the AI extraction process can surface complexities or ambiguities of interpretation, allowing improvements to be made prior to human review.
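
As an illustration of the comparison described above, the following is a minimal sketch of how per-question agreement between AI and human extraction could be computed. The column names, example values, and exact-match criterion are assumptions for illustration only, not the study's actual method, which may instead rely on judged matches or chance-corrected statistics such as kappa.

```python
# Minimal sketch (assumed, not the authors' code): per-question agreement
# between AI and human extraction, given one row per (article, question).
import pandas as pd

# Hypothetical example records; the real dataset would cover
# 187 articles x 17 extraction questions.
records = pd.DataFrame({
    "article_id":   [1, 1, 2, 2],
    "question":     ["title", "kirkpatrick_outcome", "title", "kirkpatrick_outcome"],
    "ai_answer":    ["Simulation training study", "Level 2", "PBL curriculum study", "Level 1"],
    "human_answer": ["Simulation training study", "Level 3", "PBL curriculum study", "Level 1"],
})

# Simple exact-match agreement after normalizing case and whitespace;
# a judged-match rubric or kappa statistic would replace this in practice.
records["agree"] = (
    records["ai_answer"].str.strip().str.lower()
    == records["human_answer"].str.strip().str.lower()
)

# Proportion of articles where AI and human answers agree, per question.
agreement_by_question = records.groupby("question")["agree"].mean()
print(agreement_by_question)
```

Aggregating agreement per question in this way is what makes the reported pattern visible: explicit items such as titles agree almost perfectly, while interpretive items show lower agreement.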