While reinforcement learning (RL) algorithms enable a single agent to find an optimal policy for a specific task, many real-world problems require the collaboration of multiple agents to achieve a common goal. In distributed multi-agent RL (DMARL), agents learn independently and their policies are combined at runtime; however, this combination typically requires that the local policies satisfy compatibility constraints for the global task to be achieved. In this paper, we study how providing agents with high-level symbolic knowledge can help address the challenges specific to this setting, such as privacy constraints, communication limitations, and slow learning. Specifically, we extend the formal tool used to verify the compatibility of local policies with respect to team actions, enabling distributed learning with theoretical guarantees in a broader range of scenarios. Furthermore, we experimentally demonstrate that symbolic knowledge about the temporal evolution of events in the environment can significantly accelerate learning in DMARL.
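The abstract does not fix a concrete formalism, but one common way to encode such temporal symbolic knowledge is as a finite automaton over high-level events (e.g., a reward-machine-style monitor) whose state augments the agent's own. The sketch below shows how an individual agent's tabular Q-learner could exploit this structure; the environment interface (`reset`, `actions`, `step` returning labeled events), the event names, and the shaping rewards are all illustrative assumptions, not the paper's construction.

```python
import random
from collections import defaultdict

# Symbolic knowledge as a small automaton over abstract events.
# Transitions: (automaton_state, event) -> next automaton state.
# All names here (EVENT_TRANSITIONS, GOAL_STATE, event labels) are
# hypothetical examples, not taken from the paper.
EVENT_TRANSITIONS = {
    (0, "button_pressed"): 1,  # first sub-goal observed
    (1, "door_opened"): 2,     # second sub-goal observed
}
GOAL_STATE = 2

def step_automaton(u, events):
    """Advance the automaton on the set of events labeled at this step."""
    for e in events:
        u = EVENT_TRANSITIONS.get((u, e), u)
    return u

def q_learning_with_automaton(env, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Q-learning over the product state (env_state, automaton_state).

    Assumed (hypothetical) env interface: reset() -> state,
    actions(state) -> non-empty list, and
    step(state, action) -> (next_state, env_reward, events, done),
    where `events` is the set of symbolic events observed at that step.
    """
    Q = defaultdict(float)
    for _ in range(episodes):
        s, u = env.reset(), 0
        done = False
        while not done:
            acts = env.actions(s)
            # epsilon-greedy over the product state
            if random.random() < eps:
                a = random.choice(acts)
            else:
                a = max(acts, key=lambda a_: Q[(s, u, a_)])
            s2, r, events, done = env.step(s, a)
            u2 = step_automaton(u, events)
            # shaping: reward progress through the automaton (a design choice,
            # one simple way knowledge of the event order can speed up learning)
            r += 1.0 if u2 != u else 0.0
            r += 10.0 if u2 == GOAL_STATE else 0.0
            best_next = 0.0 if done else max(
                Q[(s2, u2, a_)] for a_ in env.actions(s2))
            Q[(s, u, a)] += alpha * (r + gamma * best_next - Q[(s, u, a)])
            s, u = s2, u2
            done = done or u == GOAL_STATE
    return Q
```

In a distributed setting, each agent could run this loop independently on its own projection of the team-level automaton; the compatibility check discussed in the abstract would then be responsible for guaranteeing that the independently learned local policies jointly achieve the global task.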