LLM-MARL is a unified framework that integrates large language models (LLMs) with multi-agent reinforcement learning (MARL) to enhance coordination, communication, and generalization in simulated game environments. It comprises three modular components: a coordinator that dynamically generates subgoals, a communicator that facilitates symbolic inter-agent messaging, and an episodic memory module. Training combines PPO with a language-conditioned loss and LLM query gating. LLM-MARL has been evaluated on Google Research Football, MAgent Battle, and StarCraft II, consistently outperforming MAPPO and QMIX in win rate, coordination score, and zero-shot generalization. Ablation studies show that subgoal generation and language-based messaging each contribute significantly to performance. Qualitative analysis reveals emergent behaviors such as role specialization and communication-driven tactics. By bridging language modeling and policy learning, this work points toward intelligent, cooperative agents in interactive simulations and illustrates how LLMs can be leveraged in multi-agent systems for training, gaming, and human-AI collaboration.
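To make the modular structure concrete, the following is a minimal Python sketch of how the coordinator, communicator, and episodic memory might interact under query gating. All class names, the query_llm stub, and the query_interval gating heuristic are illustrative assumptions for exposition, not the paper's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Stub LLM call; a real system would wrap an actual language model query.
def query_llm(prompt: str) -> str:
    return f"subgoal_for({prompt})"

@dataclass
class EpisodicMemory:
    """Stores per-episode records (subgoals, messages) for later retrieval."""
    entries: List[Dict] = field(default_factory=list)

    def store(self, record: Dict) -> None:
        self.entries.append(record)

    def recent(self, k: int = 5) -> List[Dict]:
        return self.entries[-k:]

class Coordinator:
    """Generates per-agent subgoals, querying the LLM only when the gate allows."""
    def __init__(self, llm: Callable[[str], str], query_interval: int = 10):
        self.llm = llm
        self.query_interval = query_interval  # gate: query the LLM every N steps
        self.cached_subgoals: Dict[str, str] = {}

    def subgoals(self, agent_ids: List[str], state_summary: str, step: int) -> Dict[str, str]:
        if step % self.query_interval == 0 or not self.cached_subgoals:
            for agent_id in agent_ids:
                prompt = f"state: {state_summary}; agent: {agent_id}"
                self.cached_subgoals[agent_id] = self.llm(prompt)
        return self.cached_subgoals

class Communicator:
    """Routes short symbolic messages between agents via per-agent inboxes."""
    def __init__(self):
        self.inbox: Dict[str, List[str]] = {}

    def send(self, sender: str, receiver: str, message: str) -> None:
        self.inbox.setdefault(receiver, []).append(f"{sender}:{message}")

    def read(self, agent_id: str) -> List[str]:
        return self.inbox.pop(agent_id, [])

# One illustrative rollout loop tying the three modules together.
if __name__ == "__main__":
    agents = ["striker", "midfielder"]
    memory = EpisodicMemory()
    coordinator = Coordinator(query_llm, query_interval=10)
    comms = Communicator()

    for step in range(3):
        goals = coordinator.subgoals(agents, state_summary="ball_left_flank", step=step)
        comms.send("midfielder", "striker", "pass_incoming")
        for agent_id in agents:
            msgs = comms.read(agent_id)
            memory.store({"step": step, "agent": agent_id,
                          "subgoal": goals[agent_id], "messages": msgs})
    print(memory.recent())
```

In this sketch the gating simply caps LLM queries to one every `query_interval` steps; the cached subgoals and exchanged messages would then condition the PPO policy update, which is omitted here.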