This paper extends offline goal-conditioned reinforcement learning (GCRL) to improve performance on long-horizon tasks. To address the challenges posed by sparse rewards and discounting, which hinder learning over long horizons, the authors propose an algorithm that learns a flat (non-hierarchical) goal-conditioned policy guided by sub-goal-conditioned policies. The algorithm eliminates the need for a generative model over the sub-goal space, which facilitates scaling to high-dimensional goal spaces. Furthermore, the authors show that existing hierarchical and bootstrapping-based approaches correspond to specific design choices within the proposed framework. Experiments on various benchmarks show that the proposed algorithm outperforms existing GCRL algorithms, demonstrating its applicability to complex, long-horizon tasks. A minimal sketch of the general idea follows.
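The sketch below is a hypothetical illustration of the flat-policy idea described above, not the authors' implementation. It assumes sub-goals are intermediate states sampled from the offline trajectories themselves (avoiding a generative sub-goal model), and that the flat goal-conditioned policy is trained to match the actions of a sub-goal-conditioned policy in a distillation-style objective; all names and the loss are illustrative.

```python
# Hypothetical sketch: a flat goal-conditioned policy learns from a
# sub-goal-conditioned "teacher" whose sub-goals come from dataset states.
import torch
import torch.nn as nn

class GoalConditionedPolicy(nn.Module):
    def __init__(self, state_dim, goal_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, goal):
        return self.net(torch.cat([state, goal], dim=-1))

def train_step(flat_policy, subgoal_policy, batch, optimizer):
    """One illustrative update: the flat policy conditioned on the distant
    final goal imitates the sub-goal policy conditioned on a nearer sub-goal
    sampled from the same offline trajectory (an assumed design choice)."""
    state, subgoal, final_goal = batch["state"], batch["subgoal"], batch["goal"]
    with torch.no_grad():
        target_action = subgoal_policy(state, subgoal)   # short-horizon teacher
    pred_action = flat_policy(state, final_goal)          # long-horizon student
    loss = nn.functional.mse_loss(pred_action, target_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the sub-goals are drawn from the data rather than generated, this kind of construction sidesteps the generative-model requirement that the summary highlights; how the actual paper realizes this is not specified here.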