Modeling human variability in rating tasks is crucial for personalization, multi-factorial model alignment, and computational social science. In this paper, we represent individuals using natural language value profiles, descriptions of underlying values compressed from in-context demonstrations, and propose a manipulable decoder model that estimates individual ratings conditioned on a rater representation. We employ information-theoretic methods to measure the predictive information in rater representations and find that demonstrations carry the most information, followed by value profiles and then demographics. However, value profiles effectively compress the useful information from demonstrations, preserving over 70% of their predictive information, while offering advantages in reviewability, interpretability, and manipulability. Furthermore, clustering value profiles to identify individuals who behave similarly explains rater variability better than even the most predictive demographic groupings. Beyond test set performance, we show that decoder predictions track semantic differences between value profiles, are well calibrated, and help account for instance-level disagreement when used to simulate annotator populations. These results demonstrate that value profiles offer a novel and predictive way to explain individual variability beyond demographic or group information.
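
To make the information-preservation comparison concrete, the following Python sketch computes the fraction of the demonstrations' gain in decoder log-likelihood that a given rater representation retains. The normalization, the function name, and the example numbers are assumptions chosen only for illustration; they are not the paper's actual methodology or results.

# Hypothetical sketch: how much of the demonstrations' predictive gain a
# rater representation retains, measured via decoder negative log-likelihood
# (NLL). All numbers below are invented for illustration.

def information_preserved(nll_baseline, nll_repr, nll_demos):
    """Fraction of the NLL reduction achieved by demonstrations that a
    given rater representation (e.g. a value profile) retains.

    nll_baseline: decoder NLL with no rater information
    nll_repr:     decoder NLL conditioned on the representation
    nll_demos:    decoder NLL conditioned on raw demonstrations
    """
    total_gain = nll_baseline - nll_demos
    if total_gain <= 0:
        return 0.0
    return (nll_baseline - nll_repr) / total_gain

# Illustrative NLLs (nats per rating) for one rater; placeholder values.
nll_no_info = 1.60        # decoder sees only the item
nll_demographics = 1.55   # decoder also sees demographic attributes
nll_value_profile = 1.25  # decoder also sees the value profile
nll_demos = 1.10          # decoder also sees raw demonstrations

for name, nll in [("demographics", nll_demographics),
                  ("value profile", nll_value_profile),
                  ("demonstrations", nll_demos)]:
    frac = information_preserved(nll_no_info, nll, nll_demos)
    print(f"{name:>15}: {frac:.0%} of demonstration-level information preserved")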