To address the severe performance degradation that large language models (LLMs) suffer under ultra-low-bit (sub-2-bit) quantization, this paper proposes PTQ1.61, a novel post-training quantization (PTQ) method that enables 1.61-bit weight quantization. Whereas existing methods incur more than 1 extra bit per weight, PTQ1.61 introduces a one-dimensional structured mask derived from input activations that adds only a negligible 0.0002 bits per weight: salient weight channels are allocated 4 bits, while the remaining channels are binarized within a block-wise scaling-factor optimization framework. Furthermore, we present a quantization preprocessing paradigm that transforms the weight distribution of the pre-trained model before quantization, alleviating the difficulty of channel-wise ultra-low-bit PTQ. Experimental results demonstrate that PTQ1.61 achieves state-of-the-art performance in ultra-low-bit quantization.
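To picture the channel-wise mixed-precision idea, the following NumPy sketch splits a weight matrix by a one-dimensional activation-based mask, keeps the selected channels at 4 bits, and binarizes the rest with per-block scaling factors. It is a minimal illustration under stated assumptions, not the paper's implementation: the saliency proxy, the `salient_ratio` and `block_size` parameters, and the closed-form mean-absolute scale used for binarization are placeholders standing in for PTQ1.61's block-wise scaling-factor optimization. It also shows why the mask overhead is negligible: one mask bit per input channel on an `out_features × in_features` matrix amortizes to roughly `1/out_features` bits per weight, which for typical LLM hidden sizes is on the order of 0.0002 bits.

```python
import numpy as np

def mixed_precision_quantize(W, act_scale, salient_ratio=0.2, block_size=128):
    """Illustrative channel-wise mixed-precision quantization (not PTQ1.61 itself).

    W:         (out_features, in_features) weight matrix
    act_scale: (in_features,) per-input-channel activation magnitude
    Returns dequantized weights and the 1-D channel mask.
    """
    out_f, in_f = W.shape
    n_salient = max(1, int(salient_ratio * in_f))

    # One-dimensional structured mask over input channels:
    # in_f bits in total, i.e. ~1/out_f extra bits per weight.
    saliency = act_scale * np.abs(W).mean(axis=0)
    mask = np.zeros(in_f, dtype=bool)
    mask[np.argsort(-saliency)[:n_salient]] = True

    W_q = np.empty_like(W, dtype=float)

    # Salient channels: symmetric 4-bit round-to-nearest (levels -8..7).
    Ws = W[:, mask]
    s4 = np.abs(Ws).max(axis=0, keepdims=True) / 7.0 + 1e-12
    W_q[:, mask] = np.clip(np.round(Ws / s4), -8, 7) * s4

    # Non-salient channels: sign binarization with a per-block scaling
    # factor alpha = mean(|w|), the L2-optimal scale for {-1, +1} codes.
    Wn = W[:, ~mask].copy()
    for start in range(0, Wn.shape[1], block_size):
        blk = Wn[:, start:start + block_size]
        alpha = np.abs(blk).mean(axis=1, keepdims=True)
        Wn[:, start:start + block_size] = np.where(blk >= 0, alpha, -alpha)
    W_q[:, ~mask] = Wn

    return W_q, mask
```

In this toy setting, keeping about 20% of channels at 4 bits and binarizing the rest averages roughly 0.2 × 4 + 0.8 × 1 ≈ 1.6 bits per weight plus the negligible mask overhead; the precise allocation behind the paper's 1.61-bit figure is a property of PTQ1.61's own method, not of this sketch.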