This paper examines complex, multi-round negotiations carried out by large language model (LLM) agents using Chain-of-Thought (CoT) reasoning. It highlights a key shortcoming of existing LLM agents: they overlook the functional role of emotional expression, generating passive, preference-driven emotional responses that leave them vulnerable to manipulation and strategic exploitation by their counterparts. To address this, the paper presents EvoEmo, an evolutionary reinforcement learning framework for optimizing dynamic emotional expression in negotiations. EvoEmo models emotional state transitions as a Markov decision process and uses population-based genetic optimization to evolve emotion policies that yield high payoffs across diverse negotiation scenarios. The paper further proposes an evaluation framework for benchmarking emotion-aware negotiation against two baselines: vanilla strategies and fixed-emotion strategies. Extensive experiments and ablation studies show that EvoEmo consistently outperforms both baselines, achieving higher success rates, greater efficiency, and larger buyer savings. These results underscore the importance of adaptive emotional expression in enabling more effective LLM agents for multi-round negotiations.
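To make the optimization loop concrete, the following is a minimal sketch of population-based genetic search over emotion policies, assuming a hypothetical discrete emotion set and a placeholder payoff function; it simplifies EvoEmo's MDP-based, state-dependent policy to a fixed per-round emotion schedule, and in the actual framework fitness would come from running LLM-vs-LLM negotiations conditioned on the scheduled emotions rather than the toy scoring shown here.

```python
import random

# Hypothetical emotion label set; the paper's actual set may differ.
EMOTIONS = ["neutral", "happy", "sad", "surprised", "angry"]
N_ROUNDS = 5          # negotiation rounds per episode
POP_SIZE = 20         # policies per generation
N_GENERATIONS = 10
MUTATION_RATE = 0.2

def random_policy():
    """A policy here is one emotion label per negotiation round,
    i.e. a fixed schedule of emotional expression (a simplification
    of EvoEmo's state-dependent MDP policy)."""
    return [random.choice(EMOTIONS) for _ in range(N_ROUNDS)]

def simulate_negotiation(policy):
    """Placeholder fitness function. In EvoEmo this would run an
    LLM-vs-LLM negotiation whose buyer utterances are conditioned on
    the scheduled emotion, returning the buyer's payoff (e.g. savings)."""
    weights = {"neutral": 0.1, "happy": 0.2, "sad": 0.3,
               "surprised": 0.3, "angry": 0.4}
    # Toy stand-in: reward stronger emotions later in the dialogue.
    return sum(weights[e] * (t + 1) for t, e in enumerate(policy)) \
        + random.gauss(0, 0.1)

def crossover(a, b):
    """Single-point crossover between two parent schedules."""
    cut = random.randint(1, N_ROUNDS - 1)
    return a[:cut] + b[cut:]

def mutate(policy):
    """Randomly resample some rounds' emotions."""
    return [random.choice(EMOTIONS) if random.random() < MUTATION_RATE else e
            for e in policy]

# Evolve the population: keep elites, breed the rest from them.
population = [random_policy() for _ in range(POP_SIZE)]
for gen in range(N_GENERATIONS):
    scored = sorted(population, key=simulate_negotiation, reverse=True)
    elites = scored[: POP_SIZE // 4]
    children = [mutate(crossover(*random.sample(elites, 2)))
                for _ in range(POP_SIZE - len(elites))]
    population = elites + children

best = max(population, key=simulate_negotiation)
print("Best evolved emotion schedule:", best)
```

Under these assumptions, the loop illustrates the core idea of the framework: candidate emotional policies are scored by the negotiation payoff they induce, and selection, crossover, and mutation gradually concentrate the population on expression patterns that earn higher payoffs.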