Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Evaluating Trust in AI, Human, and Co-produced Feedback Among Undergraduate Students

Created by
  • Haebom

Authors

Audrey Zhang, Yifei Gao, Wannapon Suraworachet, Tanya Nazaretsky, Mutlu Cukurova

Outline

This study investigated students' perceptions of generative AI feedback based on large language models (LLMs) in a higher education setting, offering insights for its effective implementation and adoption. In an experimental design involving 91 undergraduate students, the authors compared students' trust in LLM-generated, human-generated, and human-AI co-generated feedback. Specifically, they examined factors influencing feedback type identification, perceptions of feedback quality, and potential biases related to the feedback source. When the feedback source was hidden, students tended to rate AI-generated and co-generated feedback as more useful and objective than human feedback. However, when the feedback source was revealed, a stronger negative bias against AI emerged. Interestingly, the drop in perceived authenticity upon source disclosure was limited to AI-only feedback, while co-generated feedback retained positive perceptions. Experience with educational AI improved students' ability to identify LLM-generated feedback and increased their trust in all types of feedback; conversely, students with extensive experience using AI for general purposes tended to rate the feedback as less useful and trustworthy. These results underscore the importance of feedback literacy and AI literacy for trust in feedback sources and for the adoption and educational impact of AI-based feedback.

Takeaways, Limitations

Takeaways:
  • Students' perceptions of AI feedback are strongly influenced by whether the feedback source is disclosed.
  • Co-generated feedback can serve as an alternative that mitigates a key shortcoming of AI-only feedback (reduced perceived authenticity when the source is disclosed).
  • AI literacy and feedback literacy education is needed to mitigate students' negative bias against AI feedback.
  • Students' prior experience with AI shapes their perception of AI feedback.
Limitations:
  • The findings may be specific to the studied educational environment and student population; caution is needed when generalizing.
  • Objective evaluation criteria for qualitative differences in feedback are lacking.
  • When comparing human and AI feedback, it should be verified that the quality of the human feedback was held consistent.
  • Further research is needed with more diverse types of AI feedback and a wider range of student populations.