This paper proposes GenZ-LTL, a novel method that addresses the problem of generalizing Reinforcement Learning (RL) to complex, temporally extended task objectives and safety constraints specified in Linear Temporal Logic (LTL). Existing methods struggle with nested, long-horizon tasks under safety constraints and fail to find alternatives when a subgoal turns out to be unattainable. To overcome these limitations, GenZ-LTL leverages the structure of Büchi automata to decompose LTL task specifications into a sequence of reach-avoid subgoals. Unlike conventional methods that condition the policy on the entire subgoal sequence, GenZ-LTL achieves zero-shot generalization by solving subgoals one at a time using a safe RL formulation. Furthermore, it introduces a novel subgoal-induced observation reduction technique that, under realistic assumptions, mitigates the exponential complexity of subgoal-state combinations. Experimental results demonstrate that GenZ-LTL significantly outperforms existing methods in zero-shot generalization.
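To make the one-subgoal-at-a-time idea concrete, the sketch below shows the high-level control loop the summary implies: a Büchi automaton is read one transition at a time, each transition yields a reach-avoid subgoal, and a goal-conditioned safe-RL policy handles one subgoal before the automaton state advances. This is a minimal illustration, not the authors' implementation; the automaton encoding, `ReachAvoid`, `policy.rollout`, and `env` are all assumed names, and picking the first outgoing edge is a simplification of how GenZ-LTL handles unattainable subgoals.

```python
# Minimal sketch (hypothetical names throughout) of solving an LTL task
# as a sequence of reach-avoid subgoals derived from a Buchi automaton.

from dataclasses import dataclass

@dataclass(frozen=True)
class ReachAvoid:
    reach: frozenset  # propositions that advance the automaton
    avoid: frozenset  # propositions that must stay false (safety)

# Buchi automaton for roughly F(a & F b) & G !c, encoded as
# {state: [(reach_props, avoid_props, next_state), ...]}.
AUTOMATON = {
    "q0": [(frozenset({"a"}), frozenset({"c"}), "q1")],
    "q1": [(frozenset({"b"}), frozenset({"c"}), "q_acc")],
}
ACCEPTING = {"q_acc"}

def current_subgoal(state):
    """Expose one outgoing automaton edge as a reach-avoid subgoal.
    (GenZ-LTL can also switch to alternative edges when a subgoal is
    unattainable; taking the first edge is a simplification.)"""
    reach, avoid, nxt = AUTOMATON[state][0]
    return ReachAvoid(reach, avoid), nxt

def run_episode(env, policy):
    """Solve subgoals one by one until an accepting state is reached."""
    q = "q0"
    obs = env.reset()
    while q not in ACCEPTING:
        subgoal, q_next = current_subgoal(q)
        # The policy is conditioned only on the *current* subgoal, not
        # the whole subgoal sequence -- the key to zero-shot reuse on
        # unseen specifications.
        obs, satisfied, violated = policy.rollout(env, obs, subgoal)
        if violated:
            return False  # avoid set touched: safety violation
        if satisfied:
            q = q_next    # reach set touched: advance the automaton
    return True
```

Because the policy sees only the current reach-avoid pair, the same trained policy can, in principle, be reused across arbitrary LTL specifications whose automata decompose into the same kind of subgoals, which is the generalization property the paper targets.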