In this paper, we present a causal knowledge transfer framework for multi-agent reinforcement learning (MARL) in unpredictable environments, where effective knowledge transfer between agents with changing goals is a challenging task. Our framework enables agents to learn and share concise causal representations of paths in the environment. When environmental changes such as new obstacles occur, conflicts between agents are modeled as causal interventions, which are instantiated as recovery action sequences (macros) that bypass the obstacles and increase the probability of goal achievement. These recovery macros are transferred online from other agents without retraining and are applied by querying a lookup model with local context information (the observed conflicts).
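To make the lookup-based transfer step concrete, the following minimal sketch (in Python, with hypothetical names such as `Conflict`, `MacroLibrary`, `share`, and `query` that are not from the paper) illustrates how a locally observed conflict could serve as a key under which one agent publishes a recovery macro and another agent retrieves and reuses it without retraining; it is an illustrative assumption, not the paper's actual implementation.

```python
# Hypothetical sketch: conflict-keyed lookup of recovery action macros.
# All names and structures are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

Action = str  # e.g. "up", "down", "left", "right"


@dataclass(frozen=True)
class Conflict:
    """Local context describing where and how an agent was blocked."""
    position: Tuple[int, int]   # cell where the conflict occurred
    blocked_direction: Action   # intended move that failed


@dataclass
class MacroLibrary:
    """Lookup model mapping conflicts to shared recovery action macros."""
    table: Dict[Conflict, List[Action]] = field(default_factory=dict)

    def share(self, conflict: Conflict, macro: List[Action]) -> None:
        """Another agent publishes a recovery macro for a conflict it resolved."""
        self.table[conflict] = macro

    def query(self, conflict: Conflict) -> Optional[List[Action]]:
        """Retrieve a recovery macro for a locally observed conflict, if any."""
        return self.table.get(conflict)


# Usage: agent A resolves a conflict and shares its macro; agent B reuses it online.
library = MacroLibrary()
library.share(
    Conflict(position=(3, 4), blocked_direction="right"),
    macro=["up", "right", "right", "down"],
)

recovery = library.query(Conflict(position=(3, 4), blocked_direction="right"))
if recovery is not None:
    print("Apply recovery macro without retraining:", recovery)
```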