We point out that the fixed predictions of machine learning models can preclude individuals from ever changing their outcomes, and that existing pointwise verification methods are limited: they rely on a given dataset and cannot identify fixed predictions for out-of-sample data. In this paper, we present a new paradigm for identifying fixed predictions by finding a limited region of the feature space in which every individual receives a fixed prediction. This enables recourse verification for out-of-sample data, works without a representative dataset, and provides interpretable explanations for individuals with fixed predictions. We develop a fast method based on mixed-integer quadratically constrained programming to discover the limited region of a linear classifier, and we conduct comprehensive experimental studies of the limited region across various application areas. The experimental results show that, while existing pointwise verification methods fail to identify future fixed-prediction individuals, the proposed method identifies them and provides interpretable explanations.
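The paper's MIQCP formulation is not reproduced here, but the underlying verification idea for a linear classifier can be illustrated with a minimal sketch. Assuming (purely for illustration) that a candidate limited region is an axis-aligned box `[lower, upper]` and that the classifier scores a point as `w @ x + b`, the box yields fixed (denied) predictions exactly when the worst-case score over the box remains negative. The helper `box_is_fixed_negative` below is hypothetical and not the authors' implementation.

```python
import numpy as np

def box_is_fixed_negative(w, b, lower, upper):
    """Check whether every point x in the axis-aligned box [lower, upper]
    receives the fixed (denied) prediction sign(w @ x + b) == -1
    under a linear classifier.

    The maximum of w @ x + b over the box is attained coordinate-wise:
    take upper[i] when w[i] > 0 and lower[i] otherwise.
    """
    worst_case_score = b + np.sum(np.where(w > 0, w * upper, w * lower))
    return worst_case_score < 0

# Toy example with two features (hypothetical numbers).
w = np.array([1.0, -2.0])
b = -1.0
lower = np.array([0.0, 1.0])   # feature lower bounds of the candidate region
upper = np.array([0.5, 2.0])   # feature upper bounds of the candidate region

print(box_is_fixed_negative(w, b, lower, upper))  # True: every point in the box is denied
```

Because the worst-case check is available in closed form for a box and a linear score, searching for the largest such region can be cast as an optimization problem; the paper's contribution is a fast mixed-integer quadratically constrained formulation of that search, which this sketch does not attempt to reproduce.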