This paper proposes ACE, a novel method for addressing the vulnerability of deep neural networks to spurious correlations. Existing work has focused on imperfect spurious correlations, relying on labeled instances that break the correlation. When a spurious correlation is complete, however, correct generalization is fundamentally underspecified. ACE addresses this underspecification by learning a set of concepts that are all consistent with the training data yet make different predictions on a subset of new, unlabeled inputs. Trained with a self-training approach that encourages confident, selective disagreement, ACE performs on par with or better than existing methods on a range of complete spurious correlation benchmarks and remains robust when the correlation is imperfect. ACE is also more configurable than existing methods: it directly encodes prior knowledge and enables principled unsupervised model selection. In initial applications to language model alignment, ACE achieved competitive performance on measurement manipulation detection benchmarks without access to the unreliable measurements.
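To make the "confident and selective disagreement" idea concrete, the following is a minimal illustrative sketch, not ACE's actual objective or API: two classifier heads that both fit the labeled data can be penalized for *agreeing* on unlabeled inputs, so that minimizing the penalty drives them toward confident, differing predictions. The function name and formulation here are assumptions for illustration only.

```python
# Hypothetical sketch of a pairwise disagreement objective in the spirit of
# ACE's self-training step (illustrative; not the paper's actual loss).

def agreement_loss(p1, p2):
    """Average probability that two binary heads agree on a batch.

    p1, p2: lists of P(y=1) produced by each head on the same unlabeled
    inputs. Minimizing this value pushes the heads to make different,
    confident predictions; it approaches 0 when they disagree confidently.
    """
    agree = [a * b + (1 - a) * (1 - b) for a, b in zip(p1, p2)]
    return sum(agree) / len(agree)

# Heads that agree confidently incur a high penalty:
print(agreement_loss([0.9, 0.1], [0.9, 0.1]))  # → 0.82
# Heads that disagree confidently incur a low penalty:
print(agreement_loss([0.9, 0.1], [0.1, 0.9]))  # → 0.18
```

In a full training loop, a term like this on unlabeled data would be combined with an ordinary supervised loss on the labeled data, so each head stays consistent with the training set while the ensemble splits on the ambiguous inputs.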