Existing model-agnostic post-hoc explanation methods generate external explanations for opaque models, primarily by locally attributing model outputs to input features. However, they lack a framework that explicitly and systematically quantifies the contribution of each individual feature. This paper unifies existing local attribution methods under the Taylor expansion framework proposed by Deng et al. (2024) and formulates strict assumptions for Taylor-specific attribution: precision, association, and zero-discrepancy. Building on these assumptions, we propose TaylorPODA (Taylor expansion-derived imPortance-Order aDapted Attribution), which incorporates an additional "adaptive" property. This property enables alignment with task-specific objectives, particularly in post-hoc settings where ground-truth explanations are unavailable. Experimental evaluations demonstrate that TaylorPODA achieves competitive performance against baseline methods while providing principled and easily visualized explanations. By grounding explanations in a stronger theoretical foundation, this work supports the trustworthy deployment of opaque models.