This paper presents a novel approach to developing expert systems in a controlled and transparent manner using Large Language Models (LLMs). We generate a symbolic representation of knowledge in Prolog through a well-structured, domain-specific, prompt-based extraction method, producing a knowledge base that domain experts can verify and modify. This yields an expert system that guarantees interpretability, scalability, and reliability. Quantitative and qualitative experiments using Claude 3.7 Sonnet and GPT-4.1 demonstrate the factual accuracy and semantic consistency of the generated knowledge base. We present a transparent hybrid solution that combines the generative flexibility of LLMs with the reproducibility and accuracy of symbolic systems, paving the way for reliable AI applications in sensitive domains.
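To make the idea of a verifiable symbolic knowledge base concrete, the following is a minimal, hypothetical sketch of the kind of Prolog facts and rules such an extraction step might produce. The predicate names and the toy medical domain are illustrative assumptions, not taken from the paper.

```prolog
% Hypothetical fragment of an LLM-extracted knowledge base (illustrative only;
% predicates and domain are assumed, not the paper's actual output).
symptom(flu, fever).
symptom(flu, cough).
symptom(cold, cough).

% A condition is plausible if every symptom known for it appears
% in the list of reported symptoms.
plausible(Condition, Reported) :-
    findall(S, symptom(Condition, S), Symptoms),
    subset(Symptoms, Reported).

% Example query:
% ?- plausible(flu, [fever, cough, headache]).
% true.
```

Because the knowledge is stored as plain Prolog clauses like these, an expert can inspect, correct, or extend individual facts and rules directly, which is what enables the verification and modification step described above.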