This paper examines zero-shot reinforcement learning (zero-shot RL) methods that can be applied to real-world problems. Zero-shot RL aims to generalize to new tasks or domains without additional training. The paper characterizes the constraints that real-world data imposes (data quality, observability, and data availability), identifies the limitations of existing methods under these constraints, and proposes new techniques to address them.