Generative Flow Networks (GFlowNets) are effective at sampling diverse, high-reward objects. In many real-world settings, however, new reward queries are unavailable and GFlowNets must instead be trained from offline datasets. Existing proxy-based training methods are vulnerable to error propagation, while proxy-free approaches rely on coarse-grained constraints that limit exploration. To address these issues, this paper proposes Trajectory-Distilled GFlowNet (TD-GFN), a novel proxy-free training framework. TD-GFN learns dense, transition-level edge rewards from offline trajectories via inverse reinforcement learning, providing rich structural guidance for efficient exploration. Crucially, for robustness, these rewards are used only indirectly to guide the policy, through DAG pruning and prioritized backward sampling of training trajectories; the final gradient update therefore relies solely on the ground-truth final rewards from the dataset, preventing error propagation. Experimental results demonstrate that TD-GFN significantly outperforms a wide range of existing baselines in both convergence speed and final sample quality, establishing a more robust and efficient paradigm for offline GFlowNet training.
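To make the pipeline described above concrete, the following is a minimal Python sketch of its four stages: distilling edge-level rewards from offline trajectories, pruning the DAG, prioritized sampling of training trajectories, and a policy update driven only by the dataset's ground-truth final rewards. Everything here is an illustrative assumption, not the paper's method: the toy credit-assignment rule stands in for the actual inverse-RL step, the whole-trajectory weighting stands in for true backward sampling over the pruned DAG, and the tabular logit bump stands in for a real GFlowNet objective. All function names, the dataset, and the threshold are hypothetical.

```python
"""Illustrative sketch of a TD-GFN-style offline training pipeline (assumptions only)."""
from collections import defaultdict
import math
import random

# Toy offline dataset: each entry is (state sequence, ground-truth final reward).
offline_dataset = [
    (["s0", "s1", "s3"], 5.0),
    (["s0", "s2", "s3"], 1.0),
    (["s0", "s1", "s4"], 4.0),
]

def distill_edge_rewards(dataset):
    """Stand-in for the IRL step: average the final reward over each traversed edge."""
    totals, counts = defaultdict(float), defaultdict(int)
    for states, final_reward in dataset:
        for edge in zip(states, states[1:]):
            totals[edge] += final_reward
            counts[edge] += 1
    return {edge: totals[edge] / counts[edge] for edge in totals}

def prune_dag(edge_rewards, threshold):
    """Keep only edges whose distilled reward clears a (hypothetical) threshold."""
    return {edge for edge, r in edge_rewards.items() if r >= threshold}

def sample_training_trajectory(dataset, edge_rewards, kept_edges):
    """Simplified prioritization: weight dataset trajectories whose edges all survive
    pruning by the sum of their distilled edge rewards (not true backward sampling)."""
    weights = []
    for states, _ in dataset:
        edges = list(zip(states, states[1:]))
        ok = all(e in kept_edges for e in edges)
        weights.append(sum(edge_rewards[e] for e in edges) if ok else 0.0)
    if sum(weights) == 0.0:
        return random.choice(dataset)
    return random.choices(dataset, weights=weights, k=1)[0]

# Toy tabular "policy": one logit per edge, updated only from ground-truth rewards.
policy_logits = defaultdict(float)

def update_policy(trajectory):
    """Placeholder update: credit every edge using the trajectory's true final reward.
    The distilled edge rewards never enter this step, mirroring the indirect-use idea."""
    states, final_reward = trajectory
    for edge in zip(states, states[1:]):
        policy_logits[edge] += math.log1p(final_reward)

if __name__ == "__main__":
    edge_rewards = distill_edge_rewards(offline_dataset)
    kept_edges = prune_dag(edge_rewards, threshold=2.0)
    for _ in range(100):
        traj = sample_training_trajectory(offline_dataset, edge_rewards, kept_edges)
        update_policy(traj)
    print(sorted(policy_logits.items(), key=lambda kv: -kv[1]))
```

The key structural point the sketch tries to convey is the separation of roles: the learned edge rewards shape which trajectories the policy is trained on (via pruning and prioritization), while the gradient signal itself comes exclusively from the dataset's ground-truth final rewards.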