Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

How to Retrieve Examples in In-context Learning to Improve Conversational Emotion Recognition using Large Language Models?

Created by
  • Haebom

Author

Mengqi Wang, Tiantian Feng, Shrikanth Narayanan

Outline

This paper studies how to improve conversational emotion recognition (CER) using large language models (LLMs). In particular, the authors propose several strategies for retrieving high-quality examples for in-context learning (ICL), including random and augmented example retrieval, and analyze how conversational context affects CER accuracy. Experimental results on three datasets, IEMOCAP, MELD, and EmoryNLP, show that augmented example retrieval consistently outperforms the other techniques, highlighting the importance of retrieving label-consistent target examples and of improving examples through paraphrasing.
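The retrieval side of this idea can be sketched as follows: embed each candidate example, rank candidates by similarity to the query utterance, and place the top-k examples into the prompt. The bag-of-words cosine similarity, prompt template, and toy dialogue pool below are illustrative assumptions, not the authors' implementation (which additionally augments the retrieved examples via paraphrasing).

```python
from collections import Counter
from math import sqrt

def vectorize(text):
    """Toy bag-of-words embedding; a real system would use a sentence encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_examples(query, pool, k=2):
    """Rank labeled examples by similarity to the query utterance, keep top-k."""
    qv = vectorize(query)
    ranked = sorted(pool, key=lambda ex: cosine(qv, vectorize(ex["utterance"])),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, examples):
    """Assemble an ICL prompt from the retrieved examples plus the query."""
    lines = ["Classify the emotion of the final utterance."]
    for ex in examples:
        lines.append(f'Utterance: "{ex["utterance"]}"\nEmotion: {ex["label"]}')
    lines.append(f'Utterance: "{query}"\nEmotion:')
    return "\n\n".join(lines)

# Hypothetical labeled pool standing in for the training split of a CER dataset.
pool = [
    {"utterance": "I can't believe you did that to me!", "label": "anger"},
    {"utterance": "This is the best day of my life.", "label": "joy"},
    {"utterance": "I miss her so much it hurts.", "label": "sadness"},
]
query = "I can't believe this happened to me today."
prompt = build_prompt(query, retrieve_examples(query, pool, k=2))
print(prompt)
```

The paper's augmented variant would additionally paraphrase the retrieved utterances (e.g. with an LLM) before inserting them, so the in-context examples are both relevant and well-formed.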

Takeaways, Limitations

Takeaways:
Empirically demonstrates that augmented example retrieval for in-context learning improves conversational emotion recognition performance.
Highlights the importance of retrieving label-consistent target examples and of improving examples through paraphrasing.
Strengthens the generalizability of the results through experiments on three datasets (IEMOCAP, MELD, EmoryNLP).
Limitations:
The study is limited to a specific set of example retrieval strategies; other strategies remain to be explored.
The results may not generalize beyond the characteristics of the datasets used; experiments on more diverse and larger datasets are needed.
The paper may lack detail on exactly how the augmented examples are generated and on the limitations of that process.