This study addresses the difficulty robots face in generalizing from a single demonstration, which stems from the lack of a transferable and interpretable spatial representation. We present TReF-6, a method for inferring simplified 6DoF task-relevant frames from a single demonstrated trajectory. The method identifies influence points from trajectory geometry, uses them as origins of local frames, and parameterizes Dynamic Movement Primitives (DMPs) with respect to these frames. The inferred frames are semantically grounded through a vision-language model and localized in new environments using Grounded-SAM, enabling functionally consistent skill generalization. We validate TReF-6 in simulation, showing robustness to trajectory noise, and deploy the end-to-end pipeline on real-world manipulation tasks, demonstrating that it supports one-shot imitation learning while preserving task intent across diverse object configurations.
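To make the geometric step concrete, the sketch below illustrates one plausible way to extract influence points from a trajectory and attach local frames to them. It is a minimal illustration under stated assumptions: the high-curvature selection criterion, the tangent-aligned frame construction, and all function names (`infer_influence_points`, `local_frame`) are our own hypothetical choices, not the algorithm specified by TReF-6.

```python
import numpy as np

def infer_influence_points(traj, k=2):
    """Pick high-curvature samples along a trajectory as candidate
    influence points (an assumed criterion, for illustration only)."""
    # Discrete first and second derivatives of the path.
    d1 = np.gradient(traj, axis=0)
    d2 = np.gradient(d1, axis=0)
    # Curvature of a space curve: |d1 x d2| / |d1|^3.
    speed = np.linalg.norm(d1, axis=1)
    cross = np.linalg.norm(np.cross(d1, d2), axis=1)
    curvature = cross / np.maximum(speed**3, 1e-9)
    # Return indices of the k largest-curvature samples.
    return np.argsort(curvature)[-k:]

def local_frame(traj, idx):
    """Build a local frame at an influence point: origin at the point,
    x-axis along the tangent, z-axis near world-up (illustrative)."""
    origin = traj[idx]
    tangent = np.gradient(traj, axis=0)[idx]
    x = tangent / np.linalg.norm(tangent)
    # Gram-Schmidt: project world-up off the tangent to get z.
    up = np.array([0.0, 0.0, 1.0])
    z = up - np.dot(up, x) * x
    z /= np.linalg.norm(z)
    y = np.cross(z, x)
    R = np.stack([x, y, z], axis=1)  # rotation with columns x, y, z
    return origin, R

# Toy 3D demonstration trajectory.
t = np.linspace(0, 1, 200)
traj = np.stack([t, np.sin(3 * t), 0.1 * t], axis=1)
for idx in infer_influence_points(traj):
    origin, R = local_frame(traj, idx)
    print(f"frame at sample {idx}: origin={origin.round(3)}")
```

A DMP parameterized in such a frame would encode the demonstration relative to the frame's origin and orientation, so relocating the frame (e.g., via Grounded-SAM in a new scene) transports the skill accordingly.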