This paper proposes Training with Explanations Alone (TEA), a novel learning paradigm that addresses shortcut learning, a problem that hinders the application of artificial intelligence (AI) in critical areas such as healthcare. TEA trains a classifier (the TEA student model) to match the explanation heatmap of a teacher model, ensuring that the student focuses on the same image features as the teacher. Consequently, if the teacher is trained to ignore background bias, for example by removing the image background, the student also learns to ignore background bias. Using multiple teacher models further makes the student highly resistant to foreground bias, and, surprisingly, the student's outputs remain consistent with the teacher's even though no loss function is applied to the student's output. Compared against 14 state-of-the-art methods on five datasets with strong background or foreground bias (including the Waterbirds dataset and the Xline dataset for COVID-19/pneumonia classification), the TEA student model demonstrates strong resistance to bias, outperforms the state-of-the-art methods, and generalizes well to data from hospitals not seen during training.
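To make the core idea concrete, the following is a minimal NumPy sketch, not the paper's method: it uses a linear scorer in place of a CNN and input-times-gradient attributions as a stand-in for explanation heatmaps. The student is trained only to match the teacher's heatmaps, with no loss on its own output, yet its outputs end up agreeing with the teacher's.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 200
X = rng.normal(size=(n, d))

# Hypothetical "teacher": a linear scorer f(x) = w @ x whose explanation
# heatmap for an input x is the input-times-gradient attribution w * x.
w_teacher = rng.normal(size=d)

def heatmap(w, X):
    """Per-feature input-times-gradient attribution for f(x) = w @ x."""
    return X * w  # broadcasting: one attribution value per feature

# "Student" starts from random weights and is trained with a loss on the
# explanation heatmaps ALONE -- no loss is ever applied to its output.
w_student = rng.normal(size=d)
lr = 0.3
for _ in range(200):
    diff = heatmap(w_student, X) - heatmap(w_teacher, X)  # (n, d)
    grad = 2.0 * np.mean(diff * X, axis=0)                # dL/dw_student
    w_student -= lr * grad

# Although only explanations were supervised, the student's outputs
# now agree with the teacher's.
print(np.allclose(X @ w_student, X @ w_teacher, atol=1e-4))  # True
```

In this toy setting, matching the attributions everywhere forces the student's weights to converge to the teacher's, which illustrates why supervising explanations can also align outputs; the paper's actual student, teacher, and heatmap definitions differ and are not reproduced here.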