Noting the lack of tools for interpreting value tradeoffs in large language models (LLMs), we present a framework for evaluating LLMs' value tradeoffs using models from cognitive science. Specifically, we use a cognitive model of polite language use to analyze the model's inference-time reasoning effort and the dynamics of reinforcement learning (RL) post-training. We find that the model's default behavior prioritizes informational utility over social utility, and that this pattern shifts predictably when the model is prompted to prioritize specific goals. We further examine the LLM's training dynamics, showing that the choice of base model and pre-training data substantially shapes how values change. The proposed framework can help identify value tradeoffs across model types, generate hypotheses about social behaviors such as sycophancy, and inform training methods that control the balance between values during model development.
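
As an illustration of the kind of tradeoff such a cognitive model formalizes (a sketch in the style of rational-speech-act accounts of politeness; the weights \(\phi_{\text{inf}}\), \(\phi_{\text{soc}}\), the value function \(V\), and the cost term are illustrative assumptions rather than the exact formulation used here), the speaker's utility for an utterance \(u\) given a true state \(s\) can be written as a weighted combination of an informational term and a social term:
\[
U(u \mid s) \;=\; \phi_{\text{inf}} \cdot \ln P_{\text{listener}}(s \mid u) \;+\; \phi_{\text{soc}} \cdot \mathbb{E}_{P_{\text{listener}}(\cdot \mid u)}\!\left[ V(s') \right] \;-\; \mathrm{cost}(u),
\]
where \(\phi_{\text{inf}}\) measures how much the speaker values conveying the true state and \(\phi_{\text{soc}}\) how much the speaker values the listener's positive appraisal. Under this reading, "prioritizing informational utility over social utility" corresponds to inferring a larger \(\phi_{\text{inf}}\) than \(\phi_{\text{soc}}\) from the model's generated utterances.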