This paper explores how character and context influence the behavior of large language models (LLMs), which are increasingly used as human-like decision-making agents in the social sciences and applied fields. Specifically, we propose and validate a method for probing, quantifying, and modifying the internal representations of LLMs using the Dictator Game, a classic behavioral experiment on fairness and prosocial behavior. We demonstrate that extracting "variable change vectors" (e.g., from "male" to "female") from the LLM's internal state and manipulating these vectors during inference can significantly alter the relationship between those variables and the model's decisions. This approach provides a principled way to study and regulate how social concepts are encoded, and can be engineered, within Transformer-based models, and it offers practical implications for the alignment, debiasing, and design of AI agents for social simulation in academic and commercial applications, with potential contributions to sociological theory and measurement.
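As a rough illustration of the extract-and-steer idea described above, the sketch below computes a "male → female" activation-difference vector from a hidden layer and adds it back during inference via a forward hook. It uses the Hugging Face transformers API; the model name (gpt2), layer index, prompts, and scaling factor ALPHA are illustrative assumptions, not the paper's actual experimental setup.

```python
# Minimal sketch of a "variable change vector" (activation steering), assuming
# a GPT-2-style model from Hugging Face transformers. All constants below are
# illustrative assumptions, not the paper's configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # placeholder model
LAYER_IDX = 6         # Transformer block to probe and steer (assumption)
ALPHA = 4.0           # steering strength (assumption)

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def hidden_at_last_token(prompt: str) -> torch.Tensor:
    """Hidden state of the final token at the output of block LAYER_IDX."""
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # +1 because hidden_states[0] is the embedding layer output.
    return out.hidden_states[LAYER_IDX + 1][0, -1, :]

# 1. Extract a variable change vector as the activation difference between
#    two prompts that differ only in the attribute of interest.
vec = (hidden_at_last_token("The recipient in the Dictator Game is female.")
       - hidden_at_last_token("The recipient in the Dictator Game is male."))

# 2. Add the scaled vector to that block's output during inference.
def steering_hook(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + ALPHA * vec.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

layer = model.transformer.h[LAYER_IDX]  # GPT-2 block; path differs per architecture
handle = layer.register_forward_hook(steering_hook)

prompt = "You have $10 to split with an anonymous partner. You give them $"
ids = tok(prompt, return_tensors="pt")
with torch.no_grad():
    gen = model.generate(**ids, max_new_tokens=5, do_sample=False)
print(tok.decode(gen[0], skip_special_tokens=True))

handle.remove()  # restore the unmodified model
```

Comparing generations with the hook attached versus removed (or with ALPHA set to 0) gives a simple way to check whether the injected vector shifts the model's allocation decisions in the steered direction.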