This paper addresses a widespread efficiency technique in retrieval systems: lowering the numerical precision of model parameters and score computations. While effective for speed and memory, low precision often produces excessive ties in query-document relevance scores, which inflate run-to-run variability in rankings and undermine the reliability of evaluation. To address this, the authors propose a more robust retrieval evaluation protocol with two components: High Precision Scoring (HPS), which upcasts only the final score-computation step to high precision so that tied candidates are resolved at minimal computational cost, and a Tie-Aware Retrieval Metric (TRM), which quantifies ordering uncertainty by reporting the expected metric value, its range, and the bias induced by tied candidates. Experiments demonstrate that HPS substantially reduces tie-induced instability and that TRM accurately recovers expected metric values; together, they enable a more consistent and reliable evaluation of low-precision retrieval systems.
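The abstract describes HPS only at a high level, so the following is a minimal sketch of the idea, assuming a standard dual-encoder setup where relevance is an inner product between query and document embeddings; the NumPy setup, the fp16/float64 pairing, and the array sizes are illustrative assumptions rather than the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.standard_normal(128).astype(np.float16)          # low-precision query embedding
D = rng.standard_normal((1000, 128)).astype(np.float16)  # low-precision document embeddings

# Fully low-precision scoring: many candidates collide onto the same fp16 value.
scores_lo = D @ q

# HPS-style scoring: keep the stored embeddings in fp16, but upcast only the
# final inner-product step to float64 so near-ties are separated.
scores_hps = D.astype(np.float64) @ q.astype(np.float64)

print("fp16 ties:", len(scores_lo) - len(np.unique(scores_lo)))
print("HPS ties: ", len(scores_hps) - len(np.unique(scores_hps)))
```

Because the stored index stays in low precision and only the last arithmetic step is upcast, the extra cost is confined to the scoring pass (or, cheaper still, to rescoring only the tied candidates), which is consistent with the abstract's claim of minimal overhead.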
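Similarly, the abstract states only what TRM reports, not how. A minimal sketch of one way to obtain those quantities, assuming ties are broken uniformly at random (a standard tie-aware treatment; the paper's exact formulation may differ) and using DCG as the example metric:

```python
import numpy as np

def tie_groups(scores):
    """Indices sorted by descending score, split into runs of exactly equal scores."""
    order = np.argsort(-scores, kind="stable")
    groups, start = [], 0
    for i in range(1, len(order) + 1):
        if i == len(order) or scores[order[i]] != scores[order[start]]:
            groups.append(order[start:i])
            start = i
    return groups

def tie_aware_dcg(scores, gains):
    """Expected, minimum, and maximum DCG when ties are permuted uniformly at random."""
    discounts = 1.0 / np.log2(np.arange(2, len(scores) + 2))
    expected = lo = hi = 0.0
    pos = 0
    for g in tie_groups(scores):
        d = discounts[pos:pos + len(g)]   # discount slots the tied group occupies
        grp = np.sort(gains[g])[::-1]     # gains of the tied items, descending
        expected += grp.mean() * d.sum()  # each tied item equally likely at each slot
        hi += float(grp @ d)              # best case: high gains take early slots
        lo += float(grp[::-1] @ d)        # worst case: low gains take early slots
        pos += len(g)
    return expected, lo, hi

# A three-way tie at score 0.7: an arbitrary tie-break could put either relevant
# document first, so the metric is reported as an expected value plus a range.
scores = np.array([0.9, 0.7, 0.7, 0.7, 0.5], dtype=np.float16)
gains  = np.array([0.0, 1.0, 0.0, 1.0, 1.0])
exp_dcg, lo_dcg, hi_dcg = tie_aware_dcg(scores, gains)
print(f"expected DCG = {exp_dcg:.3f}, range = [{lo_dcg:.3f}, {hi_dcg:.3f}]")
```

The DCG of any single tie-broken run falls inside the reported range, and the bias of a particular tie-breaking policy can be read off as its observed DCG minus the expected value; reporting these quantities instead of one arbitrary ranking is what quantifies the ordering uncertainty the abstract refers to.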