This paper studies posterior sampling for score-based generative models: given a prior $p(x)$ and a measurement model $p(y|x)$, the goal is to sample from the posterior $p(x|y)$. Exact posterior sampling, even measured in KL divergence, is known to be computationally hard. Rather than pursuing exact sampling, this paper instead "tilts" the sampled distribution toward the measurements. Under minimal assumptions, we show that it is possible to sample from a distribution that is simultaneously close, in KL divergence, to the posterior of a noised version of the prior and close, in Fisher divergence, to the true posterior. This guarantees that the resulting samples are consistent with both the measurements and the prior. To our knowledge, this is the first formal guarantee of (approximate) posterior sampling in polynomial time.
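For concreteness, the following display records the standard objects invoked above; the notation (in particular the smoothing level $\sigma$ used for the noised prior) is illustrative and not fixed by the text. The posterior follows from Bayes' rule, the noised prior is the Gaussian smoothing standard in score-based models, and the two divergences are the usual KL and Fisher divergences:
$$
p(x \mid y) \;=\; \frac{p(y \mid x)\, p(x)}{p(y)},
\qquad
p_\sigma(x) \;=\; \int p(x')\, \mathcal{N}\!\big(x;\, x',\, \sigma^2 I\big)\, dx',
$$
$$
\mathrm{KL}(q \,\|\, p) \;=\; \mathbb{E}_{x \sim q}\!\left[\log \frac{q(x)}{p(x)}\right],
\qquad
\mathrm{FD}(q \,\|\, p) \;=\; \mathbb{E}_{x \sim q}\!\left[\big\|\nabla_x \log q(x) - \nabla_x \log p(x)\big\|^2\right].
$$
In these terms, the result states that the returned distribution $q$ has small $\mathrm{KL}(q \,\|\, p_\sigma(\cdot \mid y))$ for a noised prior $p_\sigma$ and small $\mathrm{FD}(q \,\|\, p(\cdot \mid y))$ for the true posterior, simultaneously.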