This paper demonstrates that Rotary Positional Embedding (RoPE) suffers from inherent distance-dependent biases that limit its ability to model long-range contexts under practical assumptions. RoPE extension methods can mitigate this problem, but they typically require post-training adjustments such as recalibration or hyperparameter retuning. This paper proposes Token-Aware Phase Attention (TAPA), a novel position encoding method that integrates a learnable phase function into the attention mechanism. TAPA preserves long-range token interactions, scales to longer contexts with direct and lightweight fine-tuning, extrapolates to unseen lengths, and achieves significantly lower perplexity on long-range contexts than the RoPE family.
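To make the idea of a learnable phase function inside attention concrete, the following is a minimal, hypothetical sketch, not TAPA's actual formulation: it assumes a per-head, content-dependent scalar phase (the placeholder projection `phase_proj`), whose pairwise differences modulate the attention logits through a cosine, so the bias depends on learned token phases rather than on raw distance alone. The module name `PhaseAttention` and all shapes are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's exact method): attention logits
# augmented by a learnable, token-dependent phase term.
import math
import torch
import torch.nn as nn

class PhaseAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # Learnable phase: one scalar per head per token, predicted from content.
        self.phase_proj = nn.Linear(d_model, n_heads)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        k = k.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        v = v.view(b, t, self.n_heads, self.d_head).transpose(1, 2)

        # Standard scaled dot-product content logits.
        logits = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)

        # Token-aware phase bias: the difference phase_i - phase_j enters
        # through a cosine, so relative phase alignment (learned per token)
        # shapes the bias rather than absolute token distance.
        phase = self.phase_proj(x).transpose(1, 2)        # (b, h, t)
        rel = phase.unsqueeze(-1) - phase.unsqueeze(-2)   # (b, h, t, t)
        logits = logits + torch.cos(rel)

        # Causal mask for autoregressive attention.
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool, device=x.device), 1)
        logits = logits.masked_fill(mask, float("-inf"))

        attn = logits.softmax(dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(b, t, d)
        return self.out(y)

# Usage sketch: x = torch.randn(2, 16, 64); y = PhaseAttention(64, 4)(x)
```

Because the phase projection is learned from token content, the bias it induces can in principle be fine-tuned directly for longer contexts, in line with the abstract's claim of lightweight adaptation; the exact phase parameterization used by TAPA is not specified here.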