This study compares and evaluates hierarchical reinforcement learning (HRL) against traditional reinforcement learning (RL) for complex robot navigation tasks. We focus on the distinctive characteristics of HRL, particularly the roles of subgoal generation and termination functions. Through experiments that compare HRL with Proximal Policy Optimization (PPO) as a standard RL baseline, contrast manual and automatic subgoal generation in HRL, and examine the influence of termination frequency, we elucidate the advantages of HRL and the principles behind its operation.
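To make the two mechanisms under study concrete, the following is a minimal sketch of a two-level HRL control loop for navigation. It is purely illustrative and not the implementation evaluated in this study: the functions `high_level_policy`, `low_level_policy`, and `termination_fn`, and the parameter `max_steps`, are hypothetical stand-ins for a learned subgoal generator, a learned low-level controller, and a termination function whose firing frequency is the quantity examined in our experiments.

```python
import numpy as np

def high_level_policy(state):
    # Propose a subgoal (e.g., an intermediate waypoint) near the current state.
    # In practice this would be a learned policy; here it is a random offset.
    return state + np.random.uniform(-1.0, 1.0, size=state.shape)

def low_level_policy(state, subgoal):
    # Return a unit step direction toward the current subgoal.
    direction = subgoal - state
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 1e-8 else np.zeros_like(state)

def termination_fn(state, subgoal, steps_on_subgoal, max_steps=10):
    # Terminate the current subgoal when it is (approximately) reached or a
    # step budget expires; `max_steps` controls the termination frequency.
    return np.linalg.norm(subgoal - state) < 0.1 or steps_on_subgoal >= max_steps

# Toy rollout: the high-level policy re-selects a subgoal whenever the
# termination function fires, while the low-level policy executes primitive steps.
state = np.zeros(2)
goal = np.array([5.0, 5.0])
subgoal, steps_on_subgoal = high_level_policy(state), 0
for _ in range(200):
    state = state + 0.2 * low_level_policy(state, subgoal)
    steps_on_subgoal += 1
    if termination_fn(state, subgoal, steps_on_subgoal):
        subgoal, steps_on_subgoal = high_level_policy(state), 0
    if np.linalg.norm(goal - state) < 0.1:
        break
```

A flat RL baseline such as PPO collapses this loop into a single policy that maps states directly to primitive actions, which is the structural difference our experiments probe.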