This paper proposes AdEval, a dynamic data evaluation method that addresses data contamination in large language model (LLM) evaluation. AdEval reduces the risk of data contamination by extracting knowledge points and key ideas from static datasets and dynamically aligning the generated items with the core content of the static benchmark. It gathers background information through online searches to produce detailed explanations of each knowledge point, and designs questions across the six cognitive levels of Bloom's taxonomy (remembering, understanding, applying, analyzing, evaluating, and creating), enabling multi-level cognitive evaluation. It further controls the complexity of the dynamically generated datasets through iterative question restructuring. Experimental results on multiple datasets show that AdEval effectively mitigates the impact of data contamination, remedies the lack of complexity control and the reliance on single-dimensional evaluation, and improves the fairness, reliability, and diversity of LLM evaluation.
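The following is a minimal sketch of the pipeline described above, not the authors' implementation: the function and class names (`extract_knowledge_points`, `llm_generate`, `web_search`, `restructure`, `DynamicItem`) are hypothetical, and the LLM and search calls are stubbed so the structure of the flow (knowledge-point extraction, background retrieval, Bloom-level question generation, iterative restructuring) can be read end to end.

```python
# Illustrative sketch of an AdEval-style dynamic evaluation pipeline.
# All identifiers below are hypothetical stand-ins; the LLM and web-search
# calls are stubs to be replaced with real clients.

from dataclasses import dataclass, field
from typing import Dict, List

# The six cognitive levels of Bloom's taxonomy used for question design.
BLOOM_LEVELS = [
    "remembering", "understanding", "applying",
    "analyzing", "evaluating", "creating",
]


@dataclass
class DynamicItem:
    """One dynamically generated evaluation item."""
    knowledge_point: str
    explanation: str
    questions: Dict[str, str] = field(default_factory=dict)  # level -> question


def llm_generate(prompt: str) -> str:
    """Stub for an LLM call; swap in a real model client."""
    return f"[generated text for: {prompt[:60]}...]"


def web_search(query: str) -> str:
    """Stub for the online search that gathers background information."""
    return f"[search results for: {query}]"


def extract_knowledge_points(static_samples: List[str]) -> List[str]:
    """Extract knowledge points / key ideas aligned with the static benchmark."""
    return [
        llm_generate(f"Extract the core knowledge point from: {sample}")
        for sample in static_samples
    ]


def restructure(question: str, rounds: int = 2) -> str:
    """Iteratively rewrite a question to control its complexity."""
    for _ in range(rounds):
        question = llm_generate(f"Rewrite with adjusted complexity: {question}")
    return question


def build_dynamic_set(static_samples: List[str]) -> List[DynamicItem]:
    """Turn static benchmark samples into multi-level dynamic evaluation items."""
    items: List[DynamicItem] = []
    for kp in extract_knowledge_points(static_samples):
        background = web_search(kp)
        explanation = llm_generate(f"Explain '{kp}' using: {background}")
        item = DynamicItem(knowledge_point=kp, explanation=explanation)
        for level in BLOOM_LEVELS:
            question = llm_generate(
                f"Write a '{level}'-level question about: {explanation}"
            )
            item.questions[level] = restructure(question)
        items.append(item)
    return items


if __name__ == "__main__":
    demo = build_dynamic_set(["Sample item from a static benchmark."])
    for item in demo:
        print(item.knowledge_point, list(item.questions))
```

Under these assumptions, each static sample yields six questions (one per Bloom level), and the `restructure` loop is where complexity control would be applied before the items are served to the model under evaluation.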