This paper presents GenZ-LTL, a novel method for zero-shot generalization in reinforcement learning (RL) to complex, temporally extended task objectives and safety constraints specified in linear temporal logic (LTL). GenZ-LTL leverages the structure of Büchi automata to decompose LTL task specifications into sequences of reach-avoid subgoals. Unlike existing methods that condition the policy on the entire subgoal sequence, it achieves zero-shot generalization by solving one subgoal at a time using a safe RL formulation. Furthermore, it introduces a novel subgoal-induced observation reduction technique that mitigates the exponential blow-up of subgoal-state combinations under realistic assumptions. Experimental results demonstrate that GenZ-LTL significantly outperforms existing methods in zero-shot generalization.
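
To make the reach-avoid decomposition concrete, the sketch below (not the authors' implementation) hand-builds a tiny Büchi automaton for the task "reach `a`, then reach `b`, while always avoiding `c`" and reads the current reach-avoid subgoal off each automaton state. All identifiers (`Subgoal`, `extract_subgoal`, the state names) are hypothetical, and the safe-RL policy call is elided; only the automaton bookkeeping is simulated.

```python
# Minimal sketch: one reach-avoid subgoal at a time from a Buchi automaton.
from dataclasses import dataclass

@dataclass(frozen=True)
class Subgoal:
    reach: frozenset  # propositions that advance the automaton
    avoid: frozenset  # propositions that lead to the violating trap state

# transitions[state][proposition] -> next state; 'acc' is accepting,
# 'trap' is the sink induced by the safety constraint (always avoid c).
TRANSITIONS = {
    "q0": {"a": "q1", "c": "trap"},
    "q1": {"b": "acc", "c": "trap"},
}

def extract_subgoal(state: str) -> Subgoal:
    """Read the current reach-avoid subgoal off the automaton state."""
    edges = TRANSITIONS.get(state, {})
    reach = frozenset(p for p, q in edges.items() if q != "trap")
    avoid = frozenset(p for p, q in edges.items() if q == "trap")
    return Subgoal(reach, avoid)

def run(label_trace):
    """Advance the automaton over a trace of labelled observations,
    solving one reach-avoid subgoal at a time (policy calls elided)."""
    state = "q0"
    for props in label_trace:
        goal = extract_subgoal(state)
        assert not (goal.avoid & props), f"safety violated in {state}"
        # A goal-conditioned safe-RL policy would act here; we only
        # apply the automaton update for the observed propositions.
        for p in props:
            state = TRANSITIONS.get(state, {}).get(p, state)
    return state

if __name__ == "__main__":
    # Trace: see nothing, then reach a, then reach b -> task accepted.
    print(run([frozenset(), {"a"}, {"b"}]))  # prints "acc"
```

Because the policy only ever sees the current `Subgoal` rather than the full sequence, it can in principle be reused across any specification whose automaton yields subgoals of this form, which is the intuition behind the zero-shot claim.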