This is a page that curates AI-related papers published worldwide. All content here is summarized using Google Gemini and operated on a non-profit basis. Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.
Consistency-based Abductive Reasoning over Perceptual Errors of Multiple Pre-trained Models in Novel Environments
Created by
Haebom
Authors
Mario Leiva, Noel Ngu, Joshua Shay Kricheli, Aditya Taparia, Ransalu Senanayake, Paulo Shakarian, Nathaniel Bastian, John Corcoran, Gerardo Simari
Outline
This paper addresses the performance degradation caused by distributional shift when pre-trained perception models are applied to novel environments. Existing metacognitive approaches use logical rules to characterize and filter model errors, but improving precision this way often comes at the cost of reduced recall. The paper hypothesizes that leveraging multiple pre-trained models can mitigate this loss of recall. It formulates the identification and management of conflicting predictions from different models as a consistency-based abductive reasoning problem, building on concepts from abductive learning (ABL) but applying them at test time rather than training time. Each model's predictions, together with learned error-detection rules, are encoded in a logic program. The method then searches for an abductive explanation (a subset of the model predictions) that maximizes prediction coverage while keeping the rate of logical inconsistency (derived from domain constraints) below a specified threshold. Two algorithms are proposed for this knowledge-representation task: an exact method based on integer programming (IP) and an efficient heuristic search (HS). Extensive experiments on simulated aerial imagery datasets with controlled, complex distributional shifts show that the consistency-based abduction framework outperforms both individual models and standard ensemble baselines, improving F1 score by approximately 13.6% and accuracy by 16.6% over the best individual model across 15 diverse test datasets. These results demonstrate that consistency-based abduction can serve as an effective mechanism for robustly integrating knowledge from multiple imperfect models in challenging novel scenarios.
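The core selection problem described above can be sketched in a few lines. The following is an illustrative toy version (not the authors' implementation): each prediction gets an identifier and a confidence score, conflicting pairs stand in for violated domain constraints, and a greedy heuristic, loosely in the spirit of the paper's heuristic search (HS), keeps predictions one at a time as long as the inconsistency rate stays below the threshold `tau`. All names and the data here are hypothetical.

```python
def inconsistency_rate(selected, conflicts):
    """Fraction of conflicting pairs whose members are BOTH kept.

    `conflicts` stands in for domain constraints: a pair (i, j) is
    violated when predictions i and j are both in the selection.
    """
    if not conflicts:
        return 0.0
    violated = sum(1 for i, j in conflicts if i in selected and j in selected)
    return violated / len(conflicts)

def greedy_abduce(predictions, conflicts, tau):
    """Greedy heuristic: consider predictions in descending confidence
    order, keeping each one only if the resulting inconsistency rate
    stays at or below the threshold `tau`."""
    selected = set()
    for pred_id, _conf in sorted(predictions.items(), key=lambda kv: -kv[1]):
        candidate = selected | {pred_id}
        if inconsistency_rate(candidate, conflicts) <= tau:
            selected = candidate
    return selected

# Toy example: four predictions from two models; predictions 0 and 2
# conflict (e.g. mutually exclusive class labels for the same region).
preds = {0: 0.9, 1: 0.8, 2: 0.7, 3: 0.6}
conflicts = [(0, 2)]
print(sorted(greedy_abduce(preds, conflicts, tau=0.0)))  # -> [0, 1, 3]
```

The exact IP variant would instead encode the same objective (maximize coverage) and constraint (inconsistency rate below `tau`) as an integer program and solve it to optimality, trading runtime for a guaranteed best explanation.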
Takeaways, Limitations
•
Takeaways:
◦
Presents a novel framework that integrates predictions from multiple pre-trained models through consistency-based abductive reasoning.
◦
Demonstrates the potential for building perception systems that are robust to distributional shift.
◦
Two algorithms, an exact integer programming (IP) method and a heuristic search (HS), make the approach applicable across settings with different computational budgets.
◦
Outperforms existing methods on simulated aerial imagery datasets (approximately 13.6% F1-score and 16.6% accuracy improvement over the best individual model).
•
Limitations:
◦
Experiments are limited to simulated datasets; applicability to real-world environments requires further study.
◦
Performance depends on how domain constraints are defined and configured.
◦
The computational complexity and scalability of the algorithms require further study.
◦
Generalization to other types of perception models and datasets remains to be verified.