This paper presents a method for solving NP-complete problems with a diffusion model based on neural combinatorial optimization (NCO). Existing NCO methods struggle with size and cross-problem generalization and incur high training costs. To address these challenges, we propose DIFU-Ada, a framework that adapts at the inference stage without any training. DIFU-Ada uses predefined guidance functions to enable conditional generation, achieving zero-shot cross-problem transfer and size generalization with no additional training. We explain this cross-problem transferability through theoretical analysis, and experimentally demonstrate that a diffusion solver trained solely on the traveling salesman problem (TSP) achieves competitive zero-shot transfer performance on TSP variants such as the prize collecting TSP (PCTSP) and the orienteering problem (OP).