This paper studies vulnerabilities in watermarking schemes used to detect text generated by large language models (LLMs). Specifically, we propose a model-agnostic, theoretically grounded \emph{Bias-Inversion Rewriting Attack} (BIRA). BIRA weakens the watermark signal by suppressing the logits of watermarked tokens during LLM-based rewriting. The proposed attack achieves an evasion rate of over 99\% while preserving the meaning of the text.
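
As a minimal sketch of the logit-suppression idea, the snippet below biases a rewriting model away from a set of suspected watermarked token ids via a HuggingFace \texttt{LogitsProcessor}. The class name, the \texttt{penalty} value, and the way \texttt{suspect\_token\_ids} would be estimated are illustrative assumptions, not the paper's exact procedure.

\begin{verbatim}
import torch
from transformers import LogitsProcessor, LogitsProcessorList

class BiasInversionProcessor(LogitsProcessor):
    """Penalize suspected watermarked tokens during rewriting.

    `suspect_token_ids` and `penalty` are illustrative placeholders;
    the abstract does not specify how BIRA estimates the watermarked set.
    """
    def __init__(self, suspect_token_ids, penalty=2.0):
        self.suspect_token_ids = torch.tensor(sorted(suspect_token_ids))
        self.penalty = penalty

    def __call__(self, input_ids, scores):
        # Lower the logits of the suspected tokens so the rewriting model
        # is biased away from reproducing the watermark's statistical signal.
        scores[:, self.suspect_token_ids] -= self.penalty
        return scores

# Hypothetical usage with a rewriting model's generate() call:
# outputs = rewriter.generate(**inputs,
#     logits_processor=LogitsProcessorList(
#         [BiasInversionProcessor(suspect_ids, penalty=2.0)]))
\end{verbatim}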