This paper presents a novel framework for transductive program synthesis (TPS). Existing program synthesis methods focus on generalizing from training examples, and are therefore brittle when training data are limited and test inputs contain diverse edge cases. The proposed method improves robustness by casting synthesis as active learning over a finite hypothesis set defined by program outputs: an LLM predicts outputs for selected test inputs, inconsistent hypotheses are eliminated, and a greedy maximin algorithm minimizes the number of LLM queries. We demonstrate significant improvements in both accuracy and efficiency on four benchmarks: Playgol, MBPP+, 1D-ARC, and programmatic world modeling on MiniGrid. The source code is available on GitHub.
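The abstract does not spell out the elimination loop, so the following is only a minimal sketch of how a greedy maximin active-learning loop of this kind could look. It assumes hypotheses are executable candidate programs, treats test inputs as the query pool, and uses a hypothetical `query_llm` oracle standing in for the LLM's output prediction; none of these names come from the paper itself.

```python
from collections import Counter

def greedy_maximin_synthesis(hypotheses, test_inputs, query_llm):
    """Iteratively prune candidate programs by querying an LLM oracle
    for outputs on the most informative test inputs.

    hypotheses:  list of callables, each mapping an input to an output
    test_inputs: pool of unlabeled test inputs
    query_llm:   oracle returning a predicted output for a given input
    """
    remaining = list(hypotheses)
    unqueried = list(test_inputs)

    while len(remaining) > 1 and unqueried:
        # Worst-case survivors for input x: the size of the largest
        # group of hypotheses agreeing on x's output. The maximin
        # choice minimizes this, guaranteeing the most eliminations
        # no matter which output the LLM predicts.
        def worst_case_survivors(x):
            counts = Counter(h(x) for h in remaining)
            return max(counts.values())

        x = min(unqueried, key=worst_case_survivors)
        unqueried.remove(x)

        # Skip inputs on which all remaining hypotheses already agree;
        # querying them cannot eliminate anything.
        if worst_case_survivors(x) == len(remaining):
            continue

        # One LLM query labels x; discard inconsistent hypotheses.
        y = query_llm(x)
        remaining = [h for h in remaining if h(x) == y]

    return remaining
```

Under this reading, each query removes at least the hypotheses outside the largest output class on the chosen input, which is why greedily maximizing the worst-case eliminations tends to keep the total number of LLM calls small.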