This paper proposes a novel few-shot approach that leverages large language models (LLMs) to improve relevance judgments for legal cases. Producing such judgments is time-consuming and requires specialized legal knowledge, and existing labeled data lack interpretability. This study presents a multi-step approach that enables LLMs to generate expert-like, interpretable relevance judgments by mimicking the workflow of human experts, flexibly integrating expert reasoning, and ensuring interpretable data labeling. Experimental results demonstrate that the proposed approach produces reliable and valid relevance assessments, allows LLMs to acquire case-analysis expertise with minimal expert supervision, and enables this expertise to be transferred to smaller models through knowledge distillation.
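As a rough illustration of what such a multi-step, few-shot pipeline could look like, the minimal sketch below decomposes the judgment into element extraction, expert-style comparison, and labeling with a rationale. All identifiers here (`call_llm`, the prompt wording, the label set) are hypothetical assumptions for illustration only; the abstract does not specify the paper's actual steps or prompts.

```python
# Hypothetical sketch of a multi-step, few-shot relevance-judgment pipeline.
# All names (call_llm, the step prompts, the label set) are illustrative
# assumptions, not the paper's actual method.
from dataclasses import dataclass
from typing import Callable

LLM = Callable[[str], str]  # any text-in / text-out completion function


@dataclass
class RelevanceJudgment:
    label: str      # e.g. "relevant" / "partially relevant" / "not relevant"
    rationale: str  # expert-style explanation, making the label interpretable


def judge_relevance(query_case: str, candidate_case: str,
                    few_shot_examples: str, call_llm: LLM) -> RelevanceJudgment:
    """Mimic an expert workflow in separate LLM steps rather than one prompt."""
    # Step 1: extract the legally salient elements of each case.
    q_elements = call_llm(
        f"{few_shot_examples}\nExtract the key legal issues and facts:\n{query_case}")
    c_elements = call_llm(
        f"{few_shot_examples}\nExtract the key legal issues and facts:\n{candidate_case}")

    # Step 2: compare the extracted elements, as an expert would.
    comparison = call_llm(
        "Compare the following two cases on their legal issues and facts.\n"
        f"Query case elements:\n{q_elements}\n\nCandidate case elements:\n{c_elements}")

    # Step 3: produce a label plus a rationale, keeping the judgment interpretable.
    answer = call_llm(
        "Based on this comparison, output one label "
        "(relevant / partially relevant / not relevant) on the first line, "
        f"then a short justification:\n{comparison}")
    label, _, rationale = answer.partition("\n")
    return RelevanceJudgment(label=label.strip(), rationale=rationale.strip())
```

Because each intermediate step emits text, the same traces could in principle serve as distillation targets for a smaller model, which is one reading of the transfer result reported above.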