This paper presents novel characterization and application techniques to address fairness issues that arise when deploying generative AI (GenAI) models. Unlike conventional AI designed for specific tasks, GenAI's broad functionality requires conditional fairness tailored to the context in which content is generated (e.g., demographic fairness when generating images of poor and successful business people). We define two levels of fairness: the first evaluates the fairness of generated outputs independently of prompts and models, while the second evaluates a model's intrinsic fairness using neutral prompts. Given the complexity of GenAI and the difficulty of fully specifying fairness, we focus on bounding the worst case, deeming a GenAI system unfair if the distance between the appearances of a specific group exceeds a predefined threshold. We also explore combinatorial testing to assess the relative completeness of cross-sectional fairness. Building on this worst-case bound, we develop a prompt injection method that enforces conditional fairness with minimal intervention within an agent-based framework, and we validate it on a state-of-the-art GenAI system.
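The worst-case criterion above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the "distance" is the total variation distance between the observed group-appearance distribution and a target distribution, and all names and the threshold value are illustrative.

```python
# Illustrative worst-case fairness check: flag a GenAI system as unfair when
# the distance between the observed group-appearance distribution and a
# target distribution exceeds a predefined threshold.

def total_variation(p, q):
    """Total variation distance between two distributions over the same groups."""
    return 0.5 * sum(abs(p[g] - q[g]) for g in p)

def is_unfair(observed_counts, target_dist, threshold=0.1):
    """True if the observed group appearances deviate beyond the threshold."""
    total = sum(observed_counts.values())
    observed = {g: c / total for g, c in observed_counts.items()}
    return total_variation(observed, target_dist) > threshold

# Example: appearances of two demographic groups across 100 generated images.
counts = {"group_a": 80, "group_b": 20}
target = {"group_a": 0.5, "group_b": 0.5}
print(is_unfair(counts, target, threshold=0.1))  # → True (TV distance = 0.3)
```

In practice, the counts would come from annotating the outputs of the GenAI system under a fixed prompt condition, and the threshold encodes how much deviation from the target distribution is tolerated.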