To bridge the gap between the dynamic nature of real-world legal practice and static benchmarks, this paper introduces J1-ENVS, the first interactive, dynamic legal environment for LLM-based agents. Guided by legal experts, it comprises six representative scenarios from Chinese legal practice spanning three levels of environmental complexity. We also present J1-EVAL, a fine-grained evaluation framework that assesses both task performance and procedural compliance across different levels of legal proficiency. Extensive experiments on 17 LLM agents show that many models possess robust legal knowledge yet struggle with procedural execution in dynamic environments. Even the state-of-the-art GPT-4o scores below 60% in overall performance. These results highlight the remaining challenges in achieving dynamic legal intelligence and offer valuable insights for future research.