To address the "biased input, biased output" problem in fair machine learning, this paper presents a method to adjust the feature distribution of data, or the internal representations of large language models (LLMs), toward an ideal distribution that guarantees fair outcomes across groups. The ideal distribution is defined as one under which a minimizer of a cost-sensitive risk is accurate and satisfies group fairness (e.g., demographic parity, equal opportunity); in other words, there is no fairness-utility trade-off. We formulate an optimal adjustment program that finds the ideal distribution closest in KL divergence to the original one, and we provide an efficient algorithm when the underlying distribution belongs to a well-known parametric family (e.g., normal or log-normal). We validate the optimal adjustment technique experimentally on synthetic and real-world datasets, demonstrating that it improves fairness without compromising utility (and sometimes even improving it). An affine adjustment of LLM representations reduces bias in multi-class classification (e.g., occupation prediction from short biographies in the Bios dataset). We also adjust the internal representations of an LLM toward the ideal distribution to ensure uniform performance across groups.
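To make the affine-adjustment idea concrete, the following is a minimal illustrative sketch, not the paper's algorithm: it assumes each group's features (or LLM representations) are roughly Gaussian and maps them onto a shared target Gaussian by whitening with the group statistics and re-coloring with the target statistics. The function name `affine_adjust` and the choice of the pooled mean and covariance as the target are assumptions made for illustration.

```python
import numpy as np

def affine_adjust(X, groups, target_mean=None, target_cov=None, eps=1e-6):
    """Affinely map each group's (assumed Gaussian) features onto a common
    target Gaussian. Illustrative sketch only, not the paper's procedure."""
    X = np.asarray(X, dtype=float)
    d = X.shape[1]
    if target_mean is None:
        target_mean = X.mean(axis=0)            # pooled mean as the target (assumption)
    if target_cov is None:
        target_cov = np.cov(X, rowvar=False)    # pooled covariance as the target (assumption)
    # Square root of the target covariance, used to re-color whitened features.
    L_target = np.linalg.cholesky(target_cov + eps * np.eye(d))
    X_adj = np.empty_like(X)
    for g in np.unique(groups):
        idx = groups == g
        mu_g = X[idx].mean(axis=0)
        cov_g = np.cov(X[idx], rowvar=False) + eps * np.eye(d)
        # Whiten with the group's own statistics, then re-color to the target.
        L_g = np.linalg.cholesky(cov_g)
        Z = np.linalg.solve(L_g, (X[idx] - mu_g).T).T
        X_adj[idx] = Z @ L_target.T + target_mean
    return X_adj

# Example usage with hypothetical LLM embeddings and group labels:
# adjusted = affine_adjust(embeddings, group_labels)
```

Under the Gaussian assumption this transport makes each group's adjusted features share the same mean and covariance, which is one simple way an affine map can move group distributions toward a common target before training a downstream classifier.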