This paper presents EmoBench-M, a new benchmark for evaluating the emotional intelligence (EI) of multimodal large language models (MLLMs). To overcome the limitations of existing benchmarks, EmoBench-M assesses the EI of MLLMs across 13 scenarios organized into three key dimensions (basic emotion recognition, conversational emotion understanding, and socially complex emotion analysis), reflecting the multimodal complexity and dynamics of real-world interactions. Evaluations of both open-source and closed-source MLLMs on EmoBench-M reveal a significant performance gap between MLLMs and humans, highlighting the need to further improve their EI capabilities. All benchmark data are publicly available.