This paper proposes Hyperbolic Rotary Positional Encoding (HoPE), a novel positional encoding method inspired by the Lorentz transformation of hyperbolic geometry, to address the limitations of the positional encoding mechanisms used to model sequence structure and long-range dependencies in Transformer models. Whereas conventional Rotary Positional Encoding (RoPE) hinders the modeling of long-range dependencies because of its oscillating attention patterns, HoPE overcomes this problem by applying Lorentz rotations, parameterized by hyperbolic functions, to token representations. Theoretical analysis shows that RoPE is a special case of HoPE's generalized formulation, and that HoPE fundamentally resolves RoPE's limitation by enforcing a monotonic decrease in attention weights as the inter-token distance increases. Experiments on a range of long-sequence benchmarks demonstrate that HoPE outperforms existing positional encoding methods.
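As a rough, illustrative sketch (not the authors' exact construction), RoPE rotates each two-dimensional feature pair of a query or key at position $m$ by an angle $m\theta$, while a Lorentz (hyperbolic) rotation of the kind described above would replace the trigonometric rotation with its hyperbolic analogue; the symbols $m$, $n$, $\theta$, $\eta$, $\mathbf{q}$, and $\mathbf{k}$ below are illustrative assumptions rather than the paper's notation:
$$
R(m\theta) \;=\; \begin{pmatrix} \cos m\theta & -\sin m\theta \\ \sin m\theta & \cos m\theta \end{pmatrix},
\qquad
L(m\theta) \;=\; \begin{pmatrix} \cosh m\theta & \sinh m\theta \\ \sinh m\theta & \cosh m\theta \end{pmatrix}.
$$
Both families compose additively, $L(a)\,L(b) = L(a+b)$, so an attention score formed from Lorentz-rotated queries and keys, e.g. $\mathbf{q}^{\top} L(m\theta)^{\top} \eta\, L(n\theta)\, \mathbf{k} = \mathbf{q}^{\top} \eta\, L\big((n-m)\theta\big)\, \mathbf{k}$ with the Minkowski metric $\eta = \mathrm{diag}(1,-1)$, depends only on the relative offset $n - m$, mirroring the relative-position property of RoPE while replacing its oscillating trigonometric terms with monotone hyperbolic ones.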