This paper presents a study of the vulnerabilities of Retrieval-Augmented Generation (RAG) in large language model (LLM)-based code generation, focusing on malicious dependency hijacking attacks. We demonstrate how the trust that both LLMs and developers place in retrieved content can be exploited by injecting malicious dependencies into RAG-based code generation (RACG) through poisoned documents. To this end, we propose a novel attack framework, ImportSnare, which combines position-aware beam search to boost the retrieval ranking of malicious documents with multilingual inductive suggestions that steer the LLM toward recommending malicious dependencies. Experiments show that ImportSnare achieves high success rates (over 50% for popular libraries such as matplotlib and seaborn) across multiple languages, including Python, Rust, and JavaScript, and remains effective even at poisoning ratios as low as 0.01%. These results highlight the supply chain risks of LLM-assisted development and underscore the need for stronger security in code generation. Our multilingual benchmarks and datasets will be made publicly available.