This paper proposes Prompt2Auto, a geometrically invariant one-shot Gaussian process (GeoGP) learning framework that enables human-guided automated robot control from a single motion prompt. Existing learning-from-demonstration methods require large datasets and struggle to generalize across coordinate transformations; to overcome these limitations, this paper presents a coordinate-transformation-based dataset construction strategy that endows the model with translational, rotational, and scaling invariance and supports multi-level prediction. GeoGP is robust to variations in the user's motion prompt and supports multi-skill autonomy. Numerical simulations conducted through a purpose-built graphical user interface, together with two real-world robot experiments, demonstrate the effectiveness of the proposed method, its cross-task generalization, and the substantial reduction in demonstration overhead it achieves.
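The core augmentation idea can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's implementation: all function names, transformation ranges, and kernel settings below are assumptions. A single 2-D demonstration trajectory is replicated under random rotations, scalings, and translations, and the augmented set is fit with a plain RBF-kernel Gaussian process regressor.

```python
import numpy as np

def augment(traj, n_copies=4, rng=None):
    """Replicate one 2-D trajectory under random rotation, scale, translation.

    Hypothetical stand-in for the paper's coordinate-transformation-based
    dataset construction; ranges below are illustrative assumptions.
    """
    rng = np.random.default_rng(rng)
    out = [traj]
    for _ in range(n_copies):
        theta = rng.uniform(0.0, 2.0 * np.pi)
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        s = rng.uniform(0.5, 1.5)            # random scaling factor
        t = rng.uniform(-1.0, 1.0, size=2)   # random translation
        out.append(s * traj @ R.T + t)
    return np.vstack(out)

def gp_fit_predict(X, y, Xq, length=0.2, noise=1e-3):
    """Zero-mean GP regression with an RBF kernel (textbook form)."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)
    K = k(X, X) + noise * np.eye(len(X))     # jitter for conditioning
    alpha = np.linalg.solve(K, y)
    return k(Xq, X) @ alpha                  # posterior mean at Xq

# One demonstrated trajectory: 50 points on a circle.
t = np.linspace(0.0, 1.0, 50)
traj = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)
data = augment(traj, n_copies=4, rng=0)

# Fit a simple one-step model: current position -> next position.
X, Y = data[:-1], data[1:]
pred = gp_fit_predict(X, Y, X[:5])
print(pred.shape)  # (5, 2)
```

Because every transformed copy of the demonstration appears in the training set, the fitted model no longer ties the skill to one coordinate frame, which is the intuition behind the invariance claim; the paper's actual GeoGP formulation and multi-level prediction scheme are more involved than this sketch.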