We present a method for generating high-quality synthetic out-of-distribution (OOD) proxies by leveraging the generative capabilities of LLMs, eliminating the reliance on external OOD data sources. We evaluate the effectiveness of our method on classical text classification tasks, such as toxicity detection and sentiment classification, as well as on classification tasks arising in LLM development and deployment, such as training reward models for RLHF and detecting misaligned model outputs.