This paper emphasizes the need for robust benchmarks for large language models (LLMs) that cover both academic and industrial domains in order to effectively assess their real-world applicability. To this end, we present two Korean expert-level benchmarks. KMMLU-Redux, reconstructed from the original KMMLU, consists of questions from the Korean National Technical Qualification Examination with critical errors removed to enhance reliability. KMMLU-Pro is based on the Korean Professional Licensing Examination and reflects professional knowledge in Korea. Experimental results show that these benchmarks comprehensively represent Korean industrial knowledge, and we release the corresponding datasets publicly.