Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Reference Points in LLM Sentiment Analysis: The Role of Structured Context

Created by
  • Haebom

Author

Junichiro Niimi

Outline

This paper investigates how the content and format of auxiliary information affect the sentiment-analysis performance of large language models (LLMs). Whereas previous studies classify sentiment from the review text alone, the authors draw on marketing theories such as prospect theory and expectation-confirmation theory to argue that reference points, not just the actual experience, must be considered. Using a lightweight 3-billion-parameter model, they compare natural-language (NL) prompts with JSON-formatted prompts and show that JSON prompts carrying the additional information improve performance without fine-tuning. In experiments on Yelp's restaurant and entertainment categories, Macro-F1 scores rise by 1.6% and 4%, and RMSE falls by 16% and 9.1%, respectively, demonstrating potential for deployment on resource-constrained edge devices. Further analysis confirms that the gains stem from contextual inference rather than mere label proxies. In conclusion, the paper shows that structured prompting can achieve competitive performance with smaller models.
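To make the idea concrete, a JSON-formatted prompt of the kind described above can be sketched as follows. This is a minimal illustration, not the paper's actual schema: the field names (`review`, `business_avg_rating`, `category`) and the choice of average rating as the reference-point signal are assumptions for demonstration only.

```python
import json

def build_json_prompt(review_text: str, avg_rating: float, category: str) -> str:
    """Assemble a JSON-formatted prompt that pairs the review text with
    auxiliary context serving as a reference point (hypothetical schema)."""
    context = {
        "task": "Predict the star rating (1-5) of the review.",
        "category": category,
        "business_avg_rating": avg_rating,  # reference point against which the experience is judged
        "review": review_text,
    }
    # The serialized JSON string would be passed to the LLM as the prompt body.
    return json.dumps(context, ensure_ascii=False)

prompt = build_json_prompt(
    "The pasta was great but the service was slow.", 4.2, "Restaurants"
)
```

The structured format makes each piece of auxiliary information an explicitly named field, which is what the paper contrasts with packing the same information into a natural-language sentence.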

Takeaways, Limitations

Takeaways:
Using structured prompting (in JSON format) can improve sentiment analysis performance for small LLMs.
Additional information enables more accurate and efficient sentiment analysis.
We present a practical alternative for realizing LLM-based sentiment analysis even in resource-constrained environments.
Extends the potential applications of LLMs in marketing.
Limitations:
The dataset used in the study was limited to Yelp data, so further research is needed to determine generalizability.
Since these results are for a model of a certain scale (3 billion parameters), further experiments with models of different scales are needed.
The JSON-based structured prompt design may only be optimal in certain situations; other forms of auxiliary information and other prompt designs warrant further research.