Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Explainable Recommendation with Simulated Human Feedback

Created by
  • Haebom

Author

Jiakai Tang, Jingsen Zhang, Zihang Tian, Xueyang Feng, Lei Wang, Xu Chen

Outline

To address the limitations of existing explainable recommender systems, this paper proposes a dynamic interaction optimization framework based on simulated human feedback. The framework uses a large language model (LLM) as a human simulator to predict human feedback on generated explanations, and strengthens the LLM's language understanding and logical reasoning through a user-tailored reward scoring method. Pareto optimization is introduced to balance the trade-offs among different aspects of explanation quality, and an off-policy optimization pipeline enables efficient model training. Experimental results show that the proposed method outperforms existing approaches.
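The paper itself does not publish its training loop here, but the core ideas can be illustrated with a minimal sketch: a stand-in LLM simulator scores an explanation on several quality aspects, a weight vector (playing the role of one point on the Pareto trade-off) scalarizes those rewards, and a clipped importance ratio reuses logged (off-policy) samples. All names below (the aspect list, weights, and the `simulated_user_score` stub) are illustrative assumptions, not the authors' implementation.

```python
import math
import random

# Assumed explanation-quality aspects rated by the simulated user.
ASPECTS = ["informativeness", "persuasiveness", "readability"]

def simulated_user_score(explanation: str, aspect: str) -> float:
    """Stand-in for an LLM-based human simulator that rates one aspect in [0, 1]."""
    random.seed(hash((explanation, aspect)) % (2**32))  # deterministic stub
    return random.random()

def scalarize(rewards: dict, weights: dict) -> float:
    """Weighted sum of per-aspect rewards; the weight vector represents one
    trade-off point among competing explanation qualities."""
    return sum(weights[a] * rewards[a] for a in rewards)

def off_policy_weight(logp_target: float, logp_behavior: float, clip: float = 5.0) -> float:
    """Clipped importance ratio pi_target / pi_behavior for reusing logged samples."""
    return min(math.exp(logp_target - logp_behavior), clip)

# Example: turn one logged explanation into a single training signal.
explanation = "We recommend this phone because its battery suits your long commutes."
rewards = {a: simulated_user_score(explanation, a) for a in ASPECTS}
weights = {"informativeness": 0.5, "persuasiveness": 0.3, "readability": 0.2}  # assumed trade-off
signal = off_policy_weight(logp_target=-12.1, logp_behavior=-12.8) * scalarize(rewards, weights)
print(rewards, round(signal, 3))
```

In a real pipeline, the scalarized, importance-weighted reward would drive updates to the explanation generator; sweeping different weight vectors is one simple way to explore the Pareto front of explanation qualities.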

Takeaways, Limitations

Takeaways:
Presents a novel framework that improves explainable recommendation by leveraging simulated human feedback.
Uses large language models to efficiently mimic human feedback and provide personalized explanations.
Considers multiple aspects of explanation quality simultaneously through Pareto optimization.
Improves data utilization and model generalization through off-policy optimization.
Limitations:
The approach depends on LLM performance, and LLM biases may affect the results.
The design of the user-tailored reward scoring method can be subjective.
Generalization to diverse datasets requires further validation.
The Pareto optimization process may increase computational cost.