Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

MMoE: Robust Spoiler Detection with Multi-modal Information and Domain-aware Mixture-of-Experts

Created by
  • Haebom

Author

Zinan Zeng, Sen Ye, Zijian Cai, Heng Wang, Yuhan Liu, Haokai Zhang, Minnan Luo

Outline

This paper proposes MMoE, a multi-modal network for spoiler detection on online movie review websites. Unlike existing methods that rely solely on the textual content of reviews, MMoE leverages multi-modal information, extracting graph, text, and meta features from the user-movie network, the review text, and the review metadata, respectively. To handle the genre-specific ways in which spoilers are expressed, MMoE adopts a Mixture-of-Experts architecture for robustness, and an expert fusion layer integrates the features from these different perspectives for the final prediction. On two widely used spoiler detection datasets, MMoE surpasses previous state-of-the-art methods by 2.56% in accuracy and 8.41% in F1 score, demonstrating strong robustness and generalization. The code is available on GitHub.
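
The sketch below illustrates the general idea described above: per-modality features (graph, text, meta) each pass through a Mixture-of-Experts, and a fusion layer combines them for prediction. This is a minimal, assumed PyTorch rendering for intuition only; module names, dimensions, and the soft gating scheme are not taken from the authors' implementation.

```python
# Illustrative sketch only: three modality features feed Mixture-of-Experts
# blocks whose outputs are combined by a fusion layer. All names, sizes, and
# the gating scheme are assumptions, not the MMoE authors' code.
import torch
import torch.nn as nn


class ModalityMoE(nn.Module):
    """Routes a modality feature vector through several experts via a soft gate."""

    def __init__(self, dim: int, num_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=-1)            # (B, E)
        outputs = torch.stack([e(x) for e in self.experts], 1)   # (B, E, D)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)      # (B, D)


class SpoilerDetectorSketch(nn.Module):
    """Fuses graph, text, and metadata features for binary spoiler prediction."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.moe_graph = ModalityMoE(dim)
        self.moe_text = ModalityMoE(dim)
        self.moe_meta = ModalityMoE(dim)
        self.fusion = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU())
        self.classifier = nn.Linear(dim, 2)  # spoiler vs. non-spoiler

    def forward(self, graph_feat, text_feat, meta_feat):
        fused = torch.cat(
            [self.moe_graph(graph_feat),
             self.moe_text(text_feat),
             self.moe_meta(meta_feat)], dim=-1)
        return self.classifier(self.fusion(fused))


# Example usage with random placeholder features
model = SpoilerDetectorSketch(dim=128)
g, t, m = (torch.randn(8, 128) for _ in range(3))
logits = model(g, t, m)  # shape: (8, 2)
```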

Takeaways, Limitations

Takeaways:
Spoiler detection performance is improved by leveraging multi-modal information (graph, text, and metadata).
The Mixture-of-Experts architecture improves robustness and generalization to genre-specific spoiler expressions.
MMoE significantly outperforms previous top-performing models on both benchmark datasets.
The code is publicly available for reproducibility.
Limitations:
The model may depend on data from specific online movie review websites; further research is needed to determine how well it generalizes to other platforms.
The number of experts and the structure of the Mixture-of-Experts may require further tuning.
Performance has not been evaluated on review data from diverse languages or cultural backgrounds.