This paper proposes a watermarking technique to address the reliability issues raised by text generated by large language models (LLMs). Existing watermarking approaches struggle to satisfy three requirements simultaneously: preserving text quality, enabling model-independent detection, and providing message embedding capacity. To address this, the paper introduces BiMark, a novel watermarking framework that meets all three requirements through three key components: a bit-flip unbiased reweighting mechanism that enables model-independent detection; a multi-layer architecture that improves detectability without degrading generation quality; and an information encoding scheme that supports multi-bit watermarking. Experimental results show that BiMark achieves up to 30% higher extraction rates than existing multi-bit watermarking approaches while maintaining text quality with low perplexity, and watermarked text performs comparably to unwatermarked text on downstream tasks such as summarization and translation.
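To illustrate the core idea behind bit-flip unbiased reweighting, the sketch below shows one simple way such a scheme can be made exactly unbiased: each vocabulary token is assigned a key-derived pseudorandom bit, and a per-step flip bit decides which half of the vocabulary is boosted, with the boost and shrink factors chosen so that averaging over the two flip values recovers the original distribution. This is a minimal illustration under assumed details (`vocab_bit`, `flip_reweight`, and the `strength` parameter are hypothetical names), not BiMark's actual mechanism.

```python
import hashlib

def vocab_bit(token_id: int, key: bytes) -> int:
    # Key-derived pseudorandom bit per vocabulary token (hypothetical split).
    h = hashlib.sha256(key + token_id.to_bytes(4, "big")).digest()
    return h[0] & 1

def flip_reweight(probs, bits, flip, strength=1.0):
    """Reweight `probs` toward the partition whose bit matches `flip`.

    Mass delta = strength * min(p0, p1) is moved from the non-matching
    partition to the matching one, so each output is a valid distribution,
    and averaging the flip=0 and flip=1 outputs returns the original
    probabilities exactly (the "unbiased" property).
    """
    p1 = sum(p for p, b in zip(probs, bits) if b == 1)
    p0 = 1.0 - p1
    if p0 <= 0.0 or p1 <= 0.0:  # one-sided distribution: nothing to shift
        return list(probs)
    delta = strength * min(p0, p1)
    match_mass = p1 if flip == 1 else p0
    other_mass = p0 if flip == 1 else p1
    gain = delta / match_mass   # relative boost for the matching partition
    loss = delta / other_mass   # relative shrink for the other partition
    return [p * (1 + gain) if b == flip else p * (1 - loss)
            for p, b in zip(probs, bits)]
```

A detector that knows the key can recover each token's bit and the flip sequence, and test whether generated tokens match the boosted partition more often than chance, without re-running the model.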