This paper explores robust coordination for effective decision-making in multi-agent systems operating in partially observable environments. Specifically, we address the question of whether communication protocols should be engineered directly or learned end-to-end. We compare two communication strategies on a collaborative task-assignment problem. The first is Learned Direct Communication (LDC), an end-to-end learning approach in which agents generate messages and actions simultaneously. The second is the Imagined Trajectory Generation Module (ITGM), an intention-communication approach that uses a compact learned world model to simulate future states and summarize them into messages. Experiments on goal-directed interactions in a grid-world environment show that while LDC is feasible in simple settings, the world-model-based approach achieves superior performance, sample efficiency, and scalability as environment complexity increases.
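To make the intention-communication idea concrete, the sketch below illustrates one plausible reading of ITGM under stated assumptions: an agent rolls a small learned world model forward for a few steps from its current latent state and compresses the imagined trajectory into a fixed-size message vector. All names, dimensions, and architectural choices here (`WorldModel`, `ITGMSketch`, the GRU summarizer, the rollout horizon) are hypothetical illustrations, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class WorldModel(nn.Module):
    """Hypothetical one-step dynamics model: predicts the next latent state."""

    def __init__(self, state_dim, action_dim, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))


class ITGMSketch(nn.Module):
    """Rolls the world model forward and summarizes the imagined
    trajectory into a fixed-size message (an assumed design, for
    illustration only)."""

    def __init__(self, state_dim, action_dim, message_dim=16, horizon=5):
        super().__init__()
        self.world_model = WorldModel(state_dim, action_dim)
        self.policy = nn.Linear(state_dim, action_dim)  # imagined action head
        self.summarizer = nn.GRU(state_dim, message_dim, batch_first=True)
        self.horizon = horizon

    def forward(self, state):
        imagined = []
        for _ in range(self.horizon):
            action = torch.tanh(self.policy(state))  # imagine an action
            state = self.world_model(state, action)  # imagine the next state
            imagined.append(state)
        traj = torch.stack(imagined, dim=1)   # (batch, horizon, state_dim)
        _, message = self.summarizer(traj)    # final hidden state as summary
        return message.squeeze(0)             # (batch, message_dim)


if __name__ == "__main__":
    # Usage: each agent broadcasts its imagined-trajectory summary.
    module = ITGMSketch(state_dim=8, action_dim=2)
    obs = torch.randn(4, 8)  # batch of 4 agents' latent observations
    msg = module(obs)
    print(msg.shape)         # torch.Size([4, 16])
```

The design choice this sketch highlights is the one the abstract contrasts with LDC: rather than emitting a message as a direct function of the current observation, the agent communicates a compressed prediction of where it intends to go, which the world model makes cheap to compute.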