Teaching AI Ethics: A Complete Guide for Educators
Key takeaways from Leon Furze's *Teaching AI Ethics* (2026)

It has been over three years since ChatGPT was introduced, and in that time AI has become deeply embedded in our daily lives. Microsoft 365 has Copilot, Google Workspace has Gemini, and new iPhones build AI into Siri. Meta has deployed Meta AI across every one of its platforms, whether users want it or not. But as the technology becomes more widespread, the ethical issues become more serious.

Leon Furze began his AI ethics training materials as a blog series in 2023 and expanded them into a book in 2026. He directly refutes the argument that "we should stop discussing ethics because AI is already ubiquitous." His core argument is the opposite: the very ubiquity and inevitability of AI make ethics education even more crucial. Let's examine each of the nine ethical areas covered in the book.

**1. Bias: Three Layers of Discrimination**

AI bias isn't simple; it involves three layers.

**Data bias** is the most widely known problem. Training data scraped from the internet overrepresents certain groups, typically white, male, and English-speaking. This is the "stochastic parrots" problem that Emily Bender and Timnit Gebru warned about: AI simply mimics the patterns it sees in its data.

**Model bias** is more subtle. Research has shown that even GPT-4o, which has undergone safety training, exhibits cultural bias when given indirect prompts. The model doesn't understand "fairness." It simply learns, and sometimes amplifies, patterns.

**Human bias** creeps in during data generation and labeling. Even Fei-Fei Li, the founder of ImageNet, admitted to being shocked by the racist and sexist labels in her own dataset.

If you want to see for yourself how serious this problem is, try generating a "CEO photo" on Midjourney: you'll see a parade of white men in suits. Generate a "nurse photo" and you'll see images of sexually objectified women. ChatGPT and Microsoft Copilot return more diverse results because they use guardrails such as system prompts. However, these are merely Band-Aids.
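The guardrail idea mentioned above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual system prompt or pipeline: the wording of the instruction and the function name are assumptions, meant only to show how a fixed, hidden instruction might be prepended to a user's request before it reaches the model.

```python
# Hypothetical sketch of a system-prompt guardrail: a fixed instruction is
# prepended to every conversation before it reaches the model. The prompt
# text and function names are illustrative, not a real product's internals.

GUARDRAIL = (
    "When generating or describing images of people in professions, "
    "depict a realistic diversity of genders, ages, and ethnicities."
)

def apply_guardrail(messages: list[dict]) -> list[dict]:
    """Prepend the guardrail as a system message if one isn't already present."""
    if messages and messages[0].get("role") == "system":
        # Merge with an existing system message rather than duplicating it.
        merged = {
            **messages[0],
            "content": GUARDRAIL + "\n\n" + messages[0]["content"],
        }
        return [merged] + messages[1:]
    return [{"role": "system", "content": GUARDRAIL}] + messages

# The user's prompt is untouched; only the hidden instruction changes.
request = apply_guardrail([{"role": "user", "content": "A photo of a CEO"}])
```

Note what this sketch makes visible: the guardrail steers outputs after the fact without changing anything about the model or its training data, which is exactly why the section calls it a Band-Aid.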
The underlying data bias remains.

**2. Environment: AI Is a Mining Industry**

To borrow Kate Crawford's phrase, AI is an **extractive technology**. Data centers consume roughly 3-4% of the electricity used in the United States. The water used for cooling is literally depleting water resources in some areas. The mining of minerals like lithium and rare earth elements, and the electronic waste produced by short hardware life cycles, are also serious issues.

The numbers are even more shocking:

- GPT-3 training: 1,287 MWh of electricity, approximately 552 tons of CO₂ emissions
- That is equivalent to the annual electricity use of about 120 average American households.

Energy consumption at inference time (when we actually use AI) is also significant. Image generation consumes dozens of times more energy than traditional AI tasks like text classification. As educators, we must ask ourselves: "Do we really need ChatGPT for this task, or would a more efficient tool suffice?"

**3. Truth: Hallucinations Are a Feature, Not a Bug**





