A major challenge in automated theorem proving with LLMs is identifying and leveraging prior mathematical knowledge and its formal representation in languages like Lean. DRIFT proposes a novel framework that decomposes informal mathematical statements into smaller, more manageable subcomponents, enabling targeted retrieval of premises from math libraries like Mathlib. DRIFT also searches for explanatory theorems, allowing the model to use the retrieved premises more effectively. Evaluated on benchmarks including ProofNet, ConNF, and MiniF2F-test, DRIFT consistently improves premise retrieval, nearly doubling the F1 score on ProofNet compared to DPR-based baselines. In particular, DRIFT performs strongly on the out-of-distribution ConNF benchmark, achieving BEq+@10 improvements of 37.14% and 42.25% with GPT-4.1 and DeepSeek-V3.1, respectively.
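The decompose-then-retrieve pattern described above can be illustrated with a toy sketch. Note this is purely hypothetical: DRIFT's actual decomposition is LLM-driven and its retrieval targets Mathlib, whereas here both steps are stand-ins (a conjunction split and a word-overlap ranker over a three-entry premise library) chosen only to show the control flow of retrieving per subcomponent and merging the results.

```python
def decompose(statement: str) -> list[str]:
    # Hypothetical decomposition: split a compound informal statement
    # into sub-statements on "and" (DRIFT uses an LLM for this step).
    return [part.strip() for part in statement.split(" and ")]

def retrieve(subcomponent: str, library: dict[str, str], k: int = 1) -> list[str]:
    # Hypothetical retriever: rank premises by word overlap with the
    # subcomponent (DRIFT uses learned dense retrieval instead).
    words = set(subcomponent.lower().split())
    scored = sorted(
        library.items(),
        key=lambda item: -len(words & set(item[1].lower().split())),
    )
    return [name for name, _ in scored[:k]]

# Toy premise library standing in for Mathlib entries.
library = {
    "Nat.add_comm": "addition of natural numbers is commutative",
    "Nat.mul_comm": "multiplication of natural numbers is commutative",
    "Nat.le_refl": "every natural number is less than or equal to itself",
}

statement = "addition is commutative and multiplication is commutative"
# Retrieve per subcomponent, then take the union of premises.
premises = sorted({p for sub in decompose(statement) for p in retrieve(sub, library)})
print(premises)  # ['Nat.add_comm', 'Nat.mul_comm']
```

Retrieving against the whole statement at once would bias results toward its dominant clause; querying each subcomponent separately is what lets the relevant premise for every part surface.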