In this paper, we propose a novel architecture, the Latent Program Network (LPN). LPN combines the generalization power of program synthesis with the scalability of deep learning by incorporating direct search into the neural model at test time. To address both the combinatorial explosion faced by existing program synthesis methods and the lack of test-time adaptability of deep learning methods, LPN learns a latent space of implicit programs that map inputs to outputs, and searches this space with gradients at test time. LPN matches or exceeds the performance of existing methods on a variety of programming-by-examples tasks without requiring a predefined domain-specific language, and on the ARC-AGI benchmark it demonstrates the ability to learn and search a latent program space to adapt to new tasks. Enabling test-time search doubles performance on out-of-distribution tasks.
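To make the test-time search concrete, the following is a minimal PyTorch-style sketch of gradient-based refinement of a latent program: a latent vector is initialized from an encoder's guess and then optimized so that a frozen decoder reproduces the demonstration pairs. The `encoder` and `decoder` interfaces, the mean initialization, and the optimizer settings are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def latent_test_time_search(encoder, decoder, pairs, steps=100, lr=1e-2):
    """Sketch of gradient-based test-time search over a latent program.

    `encoder` and `decoder` are hypothetical modules: encoder(x, y) embeds
    an input/output pair into the latent space, and decoder(x, z) predicts
    output logits for input x under latent program z.
    """
    # Initialize z from the encoder, averaged over the demonstration pairs.
    with torch.no_grad():
        z = torch.stack([encoder(x, y) for x, y in pairs]).mean(dim=0)
    z = z.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)  # optimize the latent only, not the weights
    for _ in range(steps):
        opt.zero_grad()
        # Loss: how well the frozen decoder, conditioned on z, reproduces
        # every demonstrated output.
        loss = sum(F.cross_entropy(decoder(x, z), y) for x, y in pairs)
        loss.backward()
        opt.step()
    # The refined z can then be decoded on the held-out test input.
    return z.detach()
```

Note that only the latent vector receives gradient updates; the network weights stay fixed, which is what makes this search cheap enough to run per task at test time.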