This paper studies how to improve conversational emotion recognition (CER) using large language models (LLMs). In particular, we propose strategies for retrieving high-quality examples for in-context learning (ICL), including random and augmented example retrieval, and analyze the effect of conversational context on CER accuracy. Experimental results on three datasets, IEMOCAP, MELD, and EmoryNLP, show that augmented example retrieval consistently outperforms the other techniques, underscoring the importance of retrieving examples consistent with the target and of improving examples through paraphrasing.