This paper proposes Fair Sequence Policy Optimization (FSPO), a sequence-level reinforcement learning method for large language models (LLMs). FSPO applies length-fair clipping to sequence-level importance sampling (IS) weights, addressing the problem that conventional PPO/GRPO clipping, when applied at the sequence level, systematically reweights short and long responses and distorts the optimization direction. Concretely, FSPO clips the sequence log-IS ratio to a band whose width is proportional to $\sqrt{L}$, where $L$ is the response length. Theoretically, we formalize length fairness through the length reweighting error (LRE) and prove that a small LRE guarantees directional alignment, in terms of cosine similarity, between the clipped and true updates. Empirically, we demonstrate that FSPO flattens the clipping rate across length intervals, stabilizes training, and outperforms all baselines on multiple evaluation datasets when training from the Qwen3-8B-Base model.
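As a minimal sketch of the clipping rule described above, the following PyTorch snippet clips the sequence-level log-IS ratio to a band of half-width $c\sqrt{L}$ and plugs it into a PPO-style pessimistic objective. The function name, the coefficient \texttt{c}, and the assumption of a scalar per-sequence advantage are illustrative choices, not the paper's exact formulation.

\begin{verbatim}
import torch

def length_fair_clipped_loss(logp_new, logp_old, advantages, mask, c=0.05):
    """Illustrative sketch: length-fair clipping of the sequence log-IS ratio.

    logp_new, logp_old: per-token log-probs under the current / behavior
        policy, shape (batch, seq_len).
    advantages: one scalar advantage per sequence, shape (batch,).
    mask: 1 for response tokens, 0 for padding, shape (batch, seq_len).
    c: band coefficient (assumed hyperparameter); half-width is c * sqrt(L).
    """
    # Sequence-level log importance ratio and response length L.
    log_ratio = ((logp_new - logp_old) * mask).sum(dim=-1)   # (batch,)
    lengths = mask.sum(dim=-1).clamp(min=1.0)                # (batch,)

    # Length-fair band: clip log-IS ratio to [-c*sqrt(L), +c*sqrt(L)],
    # so the band scales with sqrt(L) instead of being length-agnostic.
    band = c * lengths.sqrt()
    clipped_log_ratio = torch.clamp(log_ratio, min=-band, max=band)

    ratio = log_ratio.exp()
    clipped_ratio = clipped_log_ratio.exp()

    # PPO-style pessimistic objective at the sequence level.
    loss = -torch.minimum(ratio * advantages,
                          clipped_ratio * advantages).mean()
    return loss
\end{verbatim}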