This paper proposes LLM-MCoX (LLM-based Multi-robot Coordinated Exploration and Search), a novel framework that leverages Large Language Models (LLMs) to address the challenges of autonomous navigation and object retrieval in unknown indoor environments for multi-robot systems (MRSs). The framework combines LiDAR scan processing, frontier cluster extraction, and doorway detection with multimodal LLM (e.g., GPT-4o) inference to generate coordinated waypoint assignments from a shared environment map and robot states. LLM-MCoX outperforms existing greedy and Voronoi-based planners, reducing navigation time by 22.7% and improving search efficiency by 50% in a large-scale environment with six robots. Furthermore, LLM-MCoX supports natural language-based object retrieval, allowing human operators to provide high-level semantic guidance that traditional algorithms cannot interpret.
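To make the pipeline concrete, the sketch below illustrates one planning step of the kind the abstract describes: serializing the shared world state (robot positions, frontier clusters, doorways) into an LLM prompt and parsing the returned waypoint assignments. This is a minimal illustration, not the paper's implementation; every identifier here (`RobotState`, `build_prompt`, `parse_assignments`) is hypothetical.

```python
"""Minimal sketch of one LLM-based coordination step, under the
assumptions stated above. All names are hypothetical."""
import json
from dataclasses import dataclass


@dataclass
class RobotState:
    robot_id: int
    position: tuple[float, float]  # (x, y) in the shared map frame


def build_prompt(robots: list[RobotState],
                 frontier_clusters: list[tuple[float, float]],
                 doorways: list[tuple[float, float]]) -> str:
    """Serialize the shared environment state into a text prompt."""
    lines = [
        "You coordinate a team of robots exploring an unknown building.",
        f"Frontier cluster centroids: {frontier_clusters}",
        f"Detected doorway locations: {doorways}",
    ]
    lines += [f"Robot {r.robot_id} is at {r.position}." for r in robots]
    lines.append("Return a JSON object mapping each robot id to one "
                 "waypoint [x, y], assigning robots to distinct regions.")
    return "\n".join(lines)


def parse_assignments(llm_reply: str) -> dict[int, tuple[float, float]]:
    """Parse the LLM's JSON reply into per-robot waypoints."""
    raw = json.loads(llm_reply)
    return {int(rid): (wp[0], wp[1]) for rid, wp in raw.items()}
```

In the full system described by the abstract, the prompt would presumably also carry the shared map (e.g., as an image for a multimodal model such as GPT-4o), and the frontier clusters and doorways would be produced by the LiDAR scan-processing stage rather than passed in directly.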