This paper introduces Hyperbolic Rotary Positional Encoding (HoPE), a new approach that addresses the limitations of the positional encoding mechanisms used to model sequential structure and long-range dependencies in Transformer models. Existing absolute positional encodings struggle to extrapolate to long sequences because of their fixed positional representations, and relative approaches such as ALiBi degrade in very long contexts. The widely used Rotary Positional Encoding (RoPE) fails to model long-range dependencies reliably because its attention patterns oscillate with distance. HoPE, inspired by the Lorentz transformation in hyperbolic geometry, addresses these issues by applying Lorentz rotations to token representations using hyperbolic functions. Theoretical analysis shows that RoPE is a special case of a generalized formulation of HoPE, and that HoPE fundamentally resolves RoPE's oscillation problem by enforcing a monotonic decrease in attention weights as the inter-token distance increases. Extensive experiments, including perplexity evaluations on several extended-sequence benchmarks, demonstrate that HoPE consistently outperforms existing positional encoding methods, highlighting its enhanced ability to represent and generalize long-range dependencies. The data and code will be made publicly available.
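
To make the contrast concrete, the following is a minimal illustrative sketch rather than the paper's exact formulation: assuming the per-pair transform takes the form of a 1+1-dimensional Lorentz boost, RoPE's trigonometric rotation of each two-dimensional feature pair would be replaced by its hyperbolic counterpart,
\[
R_{\mathrm{RoPE}}(m\theta_i) = \begin{pmatrix} \cos m\theta_i & -\sin m\theta_i \\ \sin m\theta_i & \cos m\theta_i \end{pmatrix},
\qquad
R_{\mathrm{HoPE}}(m\theta_i) \approx \begin{pmatrix} \cosh m\theta_i & \sinh m\theta_i \\ \sinh m\theta_i & \cosh m\theta_i \end{pmatrix},
\]
where \(m\) denotes the token position and \(\theta_i\) a per-dimension frequency. The hyperbolic form here is an assumption for illustration only; the precise generalization under which RoPE emerges as a special case, and which yields the monotone attention decay described above, is specified in the paper itself.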