This paper investigates how the content and format of auxiliary information affect the performance of large language models (LLMs) in sentiment analysis. Unlike previous studies that classify sentiment solely from review text, we draw on marketing theories such as prospect theory and expectancy-confirmation theory to emphasize the importance of considering reference points alongside actual experiences. Using a lightweight model with 3 billion parameters, we compare natural language (NL) prompts with JSON-formatted prompts and show that JSON prompts enriched with auxiliary information improve performance without fine-tuning. Experiments on Yelp data from the restaurant and entertainment categories show increases of 1.6% and 4% in Macro-F1 and decreases of 16% and 9.1% in RMSE, respectively, suggesting that such lightweight models are suitable for deployment on resource-constrained edge devices. Further analysis confirms that the performance gain stems from contextual inference rather than from mere label proxies. In conclusion, structured prompting allows smaller models to achieve competitive performance.
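
For illustration only, the sketch below shows one way the two prompt formats compared in the abstract might be assembled. The field names (e.g., business_average_rating, user_average_rating) and helper functions are hypothetical assumptions, not the paper's actual schema.

```python
import json


def build_json_prompt(review_text, aux):
    """Assemble a JSON-formatted prompt that pairs the review text with
    auxiliary reference-point context (field names are illustrative only)."""
    payload = {
        "task": "Predict the star rating (1-5) of the review.",
        "review": review_text,
        "auxiliary_information": {
            # Hypothetical reference-point features; the paper's exact
            # fields and naming may differ.
            "business_average_rating": aux.get("business_avg"),
            "user_average_rating": aux.get("user_avg"),
            "category": aux.get("category"),
        },
    }
    return json.dumps(payload, ensure_ascii=False, indent=2)


def build_nl_prompt(review_text, aux):
    """Equivalent natural-language (NL) prompt used for comparison."""
    return (
        f"The business has an average rating of {aux.get('business_avg')}, and "
        f"the reviewer gives {aux.get('user_avg')} stars on average. "
        f"Category: {aux.get('category')}. "
        f"Review: {review_text} "
        "Predict the star rating (1-5)."
    )


if __name__ == "__main__":
    aux = {"business_avg": 4.2, "user_avg": 3.5, "category": "Restaurants"}
    review = "The food was decent but the wait was far too long."
    print(build_json_prompt(review, aux))
    print(build_nl_prompt(review, aux))
```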