This paper addresses quantization, a widely used compression technique for reducing the memory and computational costs of pre-trained large-scale models. In channel-wise post-training quantization (PTQ), a key challenge is selecting an appropriate scaling factor for mapping weight values onto a scaled integer grid. Existing methods typically fix this scale in advance through heuristic tuning or grid search. In this paper, we propose Beacon, a simple and effective algorithm that eliminates the need for such manual tuning. Beacon performs channel-wise PTQ directly on an unscaled grid and automatically determines the optimal scaling factor by exploiting the geometric properties of scalar quantization. It requires neither backpropagation nor large calibration sets. Despite its simplicity and tuning-free nature, Beacon achieves performance competitive with state-of-the-art methods, making it a practical solution for efficient model deployment.
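To make the setting concrete, the sketch below shows standard channel-wise round-to-nearest PTQ with a heuristically chosen scale, i.e. the conventional baseline the abstract contrasts against, not the Beacon algorithm itself. The max-abs scale rule and the function name are illustrative assumptions, not from the paper.

```python
def quantize_channel(weights, n_bits=4):
    """Quantize one channel to a symmetric signed integer grid.

    Uses the common max-abs heuristic to fix the scale in advance;
    Beacon would instead determine the scale automatically.
    """
    qmax = 2 ** (n_bits - 1) - 1                 # e.g. 7 for 4-bit symmetric
    scale = max(abs(w) for w in weights) / qmax  # heuristic scale (not Beacon's)
    # Map each weight to the nearest point on the scaled integer grid.
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    dequant = [scale * v for v in q]             # reconstructed weights
    return q, scale, dequant

q, s, w_hat = quantize_channel([0.31, -0.12, 0.07, -0.25])
```

Because the grid is uniform, each reconstructed weight lies within half a scale step of the original, which is why the choice of scale directly controls quantization error per channel.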