This paper proposes Fair Sequence Policy Optimization (FSPO), a sequence-level reinforcement learning method for large language models (LLMs) that applies length-fair clipping to importance sampling (IS) weights. Studying RL methods that use sequence-level IS, we find that when PPO/GRPO-style clipping is applied at the sequence level, a fixed clip range systematically reweights short and long responses, distorting the optimization direction. FSPO addresses this with a simple fix: clipping the sequence-level log-IS ratio within a band that scales as $\sqrt{L}$, where $L$ is the response length. Theoretically, we formalize length fairness through the Length Reweighting Error (LRE) and prove that a small LRE guarantees that the clipped update stays close in cosine direction to the true update. Empirically, we demonstrate that FSPO smooths the clip ratio across length bins, stabilizes training, and outperforms baselines across model sizes and evaluation datasets, with the largest gains on Qwen3-8B-Base.
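As a rough illustration only (not the paper's implementation), the PyTorch sketch below shows one way a $\sqrt{L}$-scaled clipping band could be applied to sequence-level log-IS ratios; the function name `fspo_clipped_weights` and the base half-width hyperparameter `c` are assumptions introduced here for clarity.

```python
import torch

def fspo_clipped_weights(logp_new, logp_old, lengths, c=0.1):
    """Illustrative sketch of length-fair clipping of sequence-level IS weights.

    logp_new, logp_old: per-sequence summed log-probs under the current and
        behavior policies, shape (batch,).
    lengths: response lengths L, shape (batch,).
    c: assumed base half-width of the clip band (hypothetical hyperparameter).
    """
    # Sequence-level log importance-sampling ratio.
    log_ratio = logp_new - logp_old
    # Clip band scales with sqrt(L), so a fixed band does not clip long
    # responses disproportionately more often than short ones.
    half_width = c * torch.sqrt(lengths.float())
    clipped = torch.clamp(log_ratio, -half_width, half_width)
    return torch.exp(clipped)

# Toy usage: weight advantages by the clipped sequence-level IS ratios.
logp_new = torch.tensor([-35.2, -120.7], requires_grad=True)
logp_old = torch.tensor([-34.9, -118.0])
lengths = torch.tensor([32, 256])
advantages = torch.tensor([1.0, -0.5])

weights = fspo_clipped_weights(logp_new, logp_old, lengths)
loss = -(weights * advantages).mean()
loss.backward()
```

The point of the sketch is the scaling of the band: because the sequence log-IS ratio is a sum of per-token terms, its typical magnitude grows with length, so a band proportional to $\sqrt{L}$ keeps the effective clip rate comparable across short and long responses.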