Hierarchical reinforcement learning (RL) has the potential to enable effective decision-making over long time horizons. Existing approaches, while promising, have yet to realize the benefits of large-scale training. This study identifies and addresses several key challenges in scaling online hierarchical RL to high-throughput environments. We propose Scalable Option Learning (SOL), a highly scalable hierarchical RL algorithm that achieves ~35x higher throughput than existing hierarchical approaches. We demonstrate the performance and scalability of SOL by training a hierarchical agent on 30 billion frames of experience in the complex game NetHack, significantly outperforming flat agents and exhibiting positive scaling trends. We also validate SOL in MiniHack and MuJoCo environments, demonstrating its general applicability.