This paper presents GenZ-LTL, a novel method that uses Linear Temporal Logic (LTL) to address the generalization problem in Reinforcement Learning (RL) for complex, temporally extended task goals and safety constraints. GenZ-LTL exploits the structure of Büchi automata to decompose an LTL specification into a sequence of reach-avoid subgoals. Unlike existing methods that condition on the entire subgoal sequence, it solves each subgoal one at a time via a safe RL formulation, which enables zero-shot generalization to unseen specifications. Furthermore, it introduces a novel subgoal-induced observation reduction technique that mitigates the exponential complexity of subgoal-state combinations under realistic assumptions. Experimental results demonstrate that GenZ-LTL significantly outperforms existing methods in zero-shot generalization.
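To make the decomposition concrete, below is a minimal sketch, not the paper's implementation: it hard-codes a toy Büchi automaton whose transitions carry reach-avoid subgoals and solves them sequentially with a hypothetical goal-conditioned safe-RL policy. The environment interface (`env.step` returning an observation and the set of atomic propositions currently true), the `policy.act(obs, subgoal)` call, and the single-transition-per-state automaton are all simplifying assumptions introduced here for illustration.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ReachAvoid:
    """One reach-avoid subgoal: make some 'reach' proposition true
    while keeping every 'avoid' proposition false."""
    reach: frozenset[str]
    avoid: frozenset[str]


# Toy Büchi automaton for an LTL formula like F(a & F b) & G !c:
# eventually reach a, then eventually reach b, while always avoiding c.
# Real automata may have several outgoing transitions per state; this
# sketch assumes exactly one (subgoal, next-state) pair for simplicity.
BUCHI = {
    "q0": (ReachAvoid(frozenset({"a"}), frozenset({"c"})), "q1"),
    "q1": (ReachAvoid(frozenset({"b"}), frozenset({"c"})), "q_acc"),
}


def run_episode(env, policy, q="q0", max_steps=1000):
    """Solve the specification one reach-avoid subgoal at a time.

    `policy` is assumed to be trained on single reach-avoid problems
    (goal-conditioned), not conditioned on whole subgoal sequences.
    """
    obs = env.reset()
    for _ in range(max_steps):
        if q == "q_acc":                 # accepting state reached
            return True
        subgoal, q_next = BUCHI[q]
        obs, labels = env.step(policy.act(obs, subgoal))
        if labels & subgoal.avoid:       # safety constraint violated
            return False
        if labels & subgoal.reach:       # subgoal achieved: advance automaton
            q = q_next
    return False
```

Because the policy only ever conditions on the current reach-avoid pair, a new LTL specification at test time changes only the automaton (and hence the subgoal sequence), not the policy, which is the intuition behind the zero-shot claim.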