We study how to prevent users of AI models from exploiting erroneous predictions on personal data to harm others. For open-weight models in particular, simply masking model outputs is not sufficient to prevent harmful predictions. We introduce the notion of test-time privacy and propose an algorithm that maximizes uncertainty on protected instances while preserving accuracy on the remaining instances. The algorithm optimizes a Pareto-optimal objective that balances test-time privacy against utility, and we give a certifiable approximation algorithm that attains an $(\varepsilon, \delta)$ guarantee without convexity assumptions. Furthermore, we prove a tight bound characterizing the privacy-utility tradeoff induced by the algorithm. Experiments on image recognition benchmarks show that the proposed method achieves at least three times stronger uncertainty control than pretraining, without compromising accuracy.
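As a minimal sketch of how such a privacy-utility tradeoff could be scalarized in training, the snippet below combines a standard cross-entropy term on retained data with a predictive-entropy term on protected instances; the function name, the weight `lam`, and this simple scalarization are illustrative assumptions, not the paper's actual formulation or its certified procedure.

```python
import torch
import torch.nn.functional as F

def test_time_privacy_loss(model, x_retain, y_retain, x_protected, lam=1.0):
    """Illustrative scalarized privacy-utility objective (sketch only).

    Utility term: cross-entropy on retained (non-protected) instances.
    Privacy term: negative predictive entropy on protected instances, so
    minimizing the total loss pushes protected predictions toward the
    uniform, maximum-uncertainty distribution.
    """
    # Utility: keep the model accurate on non-protected data.
    utility = F.cross_entropy(model(x_retain), y_retain)

    # Privacy: maximize entropy of predictions on protected data.
    log_probs = F.log_softmax(model(x_protected), dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()

    # lam trades off privacy against utility; sweeping it traces
    # different points along the privacy-utility frontier.
    return utility - lam * entropy
```

Sweeping `lam` only explores the tradeoff empirically; the paper's certified $(\varepsilon, \delta)$ approximation and the tight tradeoff bound are not captured by this sketch.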