Daily Arxiv

This page collects papers on artificial intelligence published around the world.
The summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
The copyright of each paper belongs to its authors and their institutions; when sharing, please cite the source.

Should I Share this Translation? Evaluating Quality Feedback for User Reliance on Machine Translation

Created by
  • Haebom

Authors

Dayeon Ki, Kevin Duh, Marine Carpuat

Outline

This paper argues that, as people increasingly use AI systems in work and daily life, feedback mechanisms are needed to support responsible use, especially in settings where users cannot themselves assess the quality of AI predictions. Using a machine translation (MT) scenario, the authors study how monolingual users decide whether to share MT output and how that behavior changes with and without quality feedback. They compare explicit feedback (error highlighting, LLM explanations) with implicit feedback (backtranslation, question-answer (QA) tables) to determine which type most effectively improves decision accuracy and appropriate reliance.
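For intuition about the implicit feedback conditions, the sketch below shows how backtranslation feedback could be produced: the MT output is translated back into the user's language so a monolingual user can check whether the meaning survives the round trip. The Hugging Face pipeline and MarianMT model names are illustrative assumptions, not the paper's actual experimental setup.

```python
# Minimal sketch of backtranslation-style implicit feedback (assumed setup, not the paper's).
# Requires: pip install transformers sentencepiece
from transformers import pipeline

forward = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")   # EN -> FR
backward = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")  # FR -> EN

source = "The patient should take the medication twice a day."
mt_output = forward(source)[0]["translation_text"]             # translation the user might share
backtranslation = backward(mt_output)[0]["translation_text"]   # implicit feedback shown to the user

print("Source:         ", source)
print("MT output (FR): ", mt_output)
print("Backtranslation:", backtranslation)
# The monolingual user compares Source and Backtranslation to decide whether to share the MT output.
```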

Takeaways, Limitations

Takeaways:
All feedback types except error highlighting significantly improved decision accuracy and appropriate reliance.
Implicit feedback, especially QA tables, was more effective than explicit feedback in terms of decision accuracy, appropriate reliance, and user perception.
QA tables received the highest ratings for usefulness and reliability, and the lowest ratings for mental burden.
Limitations:
Further analysis is needed to determine why error highlighting is ineffective.
Further research is needed to determine generalizability to other AI fields and user groups.
Further comparative research on different implicit feedback approaches is needed.