This paper addresses the challenges of existing approaches that retarget human motion to train reinforcement learning (RL) policies for complex skills. Specifically, we tackle the embodiment gap between humans and robots and the neglect of human-object and human-environment interactions, both of which are essential for expressive locomotion and loco-manipulation. OmniRetarget is a data-generation engine based on interaction meshes that explicitly models and preserves the spatial and contact relationships among the agent, the terrain, and manipulated objects. By minimizing Laplacian deformation of these meshes while enforcing kinematic constraints, OmniRetarget generates kinematically feasible trajectories. Moreover, because it preserves task-relevant interactions, it enables efficient data augmentation from a single demonstration across diverse robot embodiments, terrains, and object configurations. By retargeting motions from the OMOMO and LAFAN1 datasets and our own motion-capture data, we generate over 8 hours of trajectories that achieve better kinematic-constraint satisfaction and contact preservation than widely used baselines. With this high-quality data, our proprioceptive RL policies successfully execute long-horizon (up to 30-second) parkour and loco-manipulation tasks on a Unitree G1 humanoid, trained with only five reward terms and simple domain randomization shared across all tasks.
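To make the Laplacian-deformation objective concrete, the following is a minimal sketch (not the authors' implementation) of the core idea: each vertex of an interaction mesh, built over human/robot keypoints and object points, is described by its Laplacian coordinate, i.e. its offset from the centroid of its neighbors, and the retargeting objective penalizes changes in these coordinates between the source and target meshes. The function names, the toy mesh, and the uniform neighbor weighting are illustrative assumptions.

```python
import numpy as np

def laplacian_coords(points, neighbors):
    # Laplacian (differential) coordinate of each vertex: its offset
    # from the centroid of its neighbors in the interaction mesh.
    # points: (N, 3) array; neighbors: list of index lists, one per vertex.
    return np.array([
        p - points[nbrs].mean(axis=0)
        for p, nbrs in zip(points, neighbors)
    ])

def deformation_energy(src, tgt, neighbors):
    # Sum of squared differences between the Laplacian coordinates of the
    # source (e.g. human + object) mesh and the target (robot) mesh.
    # Minimizing this preserves relative spatial structure rather than
    # absolute positions, which is why the same demonstration can be
    # mapped to different embodiments, terrains, and object placements.
    diff = laplacian_coords(src, neighbors) - laplacian_coords(tgt, neighbors)
    return float((diff ** 2).sum())

# Toy interaction mesh: four keypoints, fully connected (illustrative only).
src = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
nbrs = [[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]]

# Rigidly translating the mesh leaves the energy at zero (relative
# structure unchanged), while scaling it does not.
print(deformation_energy(src, src + np.array([5.0, 2.0, 1.0]), nbrs))
print(deformation_energy(src, src * 2.0, nbrs) > 0.0)
```

In the full method this energy would be minimized jointly with the robot's kinematic constraints (joint limits, non-penetration), so the output trajectory stays feasible while the task-relevant contact relationships are preserved.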