This paper presents a new benchmark dataset, MapIQ, to expand research on visual data understanding in multimodal large language models (MLLMs), specifically for map-based visual question answering (Map-VQA). The dataset covers three map types (choropleth maps, cartograms, and proportional symbol maps) and six topics, and we evaluate the performance of several MLLMs on six visual analysis tasks. Furthermore, we analyze how changes in map design affect MLLM performance, examining model robustness and reliance on internal geographic knowledge, and discuss directions for improving Map-VQA performance.