What’s even more concerning is that GenAI models are trained on massive datasets scraped from the internet, datasets that can encode social bias and discrimination. From a student’s writing alone, a model may infer attributes such as race, gender, or socioeconomic background, and those inferences can disadvantage certain groups. These biases are far harder to detect and address than the biases of human graders. For human grading we have safeguards like anonymization and moderation, but for AI, simply removing a student’s name is not enough: the bias is embedded at a much deeper level, in the training data and in the patterns the model has learned.