This paper proposes a novel framework for explaining the decisions of AI systems in uncertain environments characterized by incomplete information and probabilistic models. We focus on generating two types of explanations: monolithic explanations, which are self-contained justifications of the explanandum, and model reconciling explanations, which additionally account for the knowledge of the user receiving the explanation. Monolithic explanations incorporate uncertainty by leveraging probabilistic logic to increase the probability of the explanandum. Model reconciling explanations extend a logic-based variant of the model reconciliation problem to probabilistic human models, seeking explanations that increase the probability of the explanandum while minimizing conflict with the probabilistic human model. To assess explanation quality, we introduce quantitative metrics of explanatory gain and explanatory power. We further present an algorithm that efficiently computes explanations by exploiting the duality between minimal correction sets and minimal unsatisfiable sets. Extensive experiments on various benchmarks demonstrate the efficiency and scalability of our approach for generating explanations under uncertainty.
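
As a rough illustration of the idea that an explanation should increase the probability of the explanandum, the following sketch conditions a toy probabilistic model on a candidate explanation and reports the resulting change in probability. It assumes independent Boolean variables and brute-force enumeration of possible worlds; the variable names, the `prob` helper, and the use of the conditional-probability increase as a stand-in for explanatory gain are illustrative assumptions, not the paper's exact probabilistic logic semantics or metrics.

```python
from itertools import product

# Toy probabilistic model: independent Boolean variables with given marginals.
# (Illustrative assumption; the paper's probabilistic logic may use richer models.)
PRIORS = {"a": 0.7, "b": 0.4, "c": 0.5}
VARS = list(PRIORS)

def worlds():
    """Enumerate all truth assignments together with their prior probabilities."""
    for values in product([True, False], repeat=len(VARS)):
        world = dict(zip(VARS, values))
        weight = 1.0
        for v in VARS:
            weight *= PRIORS[v] if world[v] else 1.0 - PRIORS[v]
        yield world, weight

def prob(formula, constraints=()):
    """P(formula | constraints): probability mass of worlds satisfying the formula
    among the worlds satisfying all constraints."""
    numerator = denominator = 0.0
    for world, weight in worlds():
        if all(c(world) for c in constraints):
            denominator += weight
            if formula(world):
                numerator += weight
    return numerator / denominator if denominator > 0 else 0.0

# Explanandum: c holds. Candidate explanation: the clause (a -> c), i.e. (not a) or c.
def explanandum(world):
    return world["c"]

def explanation(world):
    return (not world["a"]) or world["c"]

p_before = prob(explanandum)                # probability before adopting the explanation
p_after = prob(explanandum, [explanation])  # probability once the explanation is adopted
print(f"P(explanandum): {p_before:.3f} -> {p_after:.3f} (gain {p_after - p_before:+.3f})")
```

Under these toy assumptions, adopting the explanation raises the probability of the explanandum from 0.5 to roughly 0.77, which is the kind of increase the framework's explanatory-gain metric is meant to quantify.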