This paper proposes CycleDistill, a novel bootstrapping approach for building high-quality machine translation (MT) systems for low-resource languages. CycleDistill leverages a large language model (LLM) and few-shot translation to iteratively generate synthetic parallel corpora from monolingual corpora, fine-tuning the model on the generated data at each iteration. The method requires only 1-4 few-shot examples, and experiments on three Indian languages demonstrate that high-quality MT can be obtained even when only a monolingual corpus is available, improving on a few-shot baseline by 20-30 chrF points on average in the first iteration. Furthermore, we investigate the effect of leveraging softmax activations during the distillation process and observe a slight improvement in translation quality.
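The cyclical distillation loop summarized above can be sketched as follows. This is a minimal illustration of the idea, not the paper's implementation: the helper names (`translate`, `fine_tune`) and parameters are assumptions supplied by the caller.

```python
from typing import Callable, List, Tuple

def cycle_distill(
    model,                                     # current LLM (teacher/student)
    monolingual_corpus: List[str],             # source-language sentences only
    few_shot_examples: List[Tuple[str, str]],  # 1-4 (source, target) pairs
    translate: Callable,                       # (model, src, examples) -> hypothesis
    fine_tune: Callable,                       # (model, parallel_data) -> model
    iterations: int = 3,
):
    """Bootstrap an MT model from monolingual data by cyclical distillation."""
    for _ in range(iterations):
        # Step 1: prompt the current model with the few-shot examples to
        # synthesize a parallel corpus from the monolingual corpus.
        synthetic_parallel = [
            (src, translate(model, src, few_shot_examples))
            for src in monolingual_corpus
        ]
        # Step 2: fine-tune on the synthetic corpus; the fine-tuned model
        # becomes the generator for the next cycle.
        model = fine_tune(model, synthetic_parallel)
    return model
```

Under this sketch, the variant that distills softmax activations would replace the hard target strings in `synthetic_parallel` with the teacher's output distributions and use a soft-label loss in `fine_tune`.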