This paper proposes PLLM (pronounced "plum"), a novel approach that leverages large language models (LLMs) to resolve Python dependency issues. PLLM iteratively infers missing or incorrect dependencies using retrieval-augmented generation (RAG). PLLM builds a test environment and refines its predictions by suggesting module combinations, observing execution feedback, and parsing error messages with natural language processing (NLP). We evaluated PLLM on the Gistable HG2.9K dataset, where Gemma-2 9B with RAG achieved the best performance. PLLM achieved a significantly higher fix rate than existing solutions such as PyEGo and ReadPyE.
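
To make the iterative cycle concrete, the following is a minimal sketch of the build-run-observe-refine loop the abstract describes, not PLLM's actual implementation: `suggest_packages` stands in for the RAG-backed LLM query, the error-parsing regex is an assumption about how execution feedback might be interpreted, and the POSIX venv layout is assumed.

```python
import re
import subprocess
import venv
from pathlib import Path
from tempfile import TemporaryDirectory


def parse_import_error(stderr: str):
    """Extract a missing top-level module name from a ModuleNotFoundError."""
    match = re.search(r"No module named '([\w\.]+)'", stderr)
    return match.group(1).split(".")[0] if match else None


def infer_dependencies(gist_path: str, suggest_packages, max_iterations: int = 10):
    """Iteratively build a fresh environment, run the gist, and feed errors back.

    `suggest_packages(error_text, installed)` is a hypothetical callable that
    wraps the RAG-augmented LLM and returns candidate package names to install.
    """
    installed: list[str] = []
    for _ in range(max_iterations):
        with TemporaryDirectory() as env_dir:
            # Fresh test environment for each attempt (assumes POSIX "bin" layout).
            venv.create(env_dir, with_pip=True)
            python = str(Path(env_dir) / "bin" / "python")
            if installed:
                subprocess.run(
                    [python, "-m", "pip", "install", *installed],
                    capture_output=True, text=True,
                )
            # Execute the gist and observe the feedback.
            result = subprocess.run(
                [python, gist_path], capture_output=True, text=True,
            )
            if result.returncode == 0:
                return installed  # all dependencies resolved
            # Parse the error message and ask the LLM for new candidates.
            missing = parse_import_error(result.stderr)
            installed.extend(suggest_packages(missing or result.stderr, installed))
    return None  # loop budget exhausted without a fix
```

In this sketch the only state carried between iterations is the growing candidate package list; each round rebuilds the environment from scratch so that a bad suggestion from an earlier round cannot mask a later one.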