As people increasingly rely on AI systems in their work and daily lives, this paper highlights the need for feedback mechanisms that support responsible AI use, especially in settings where users cannot assess the quality of AI predictions themselves. Using a machine translation (MT) scenario, we study how monolingual users decide whether to share MT output, and how their behavior changes with and without quality feedback. We compare explicit feedback, such as error highlighting and LLM-generated explanations, with implicit feedback, such as backtranslation and question-and-answer (QA) tables, to assess which type of feedback most effectively improves decision accuracy and appropriate reliance.