Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Individual utilities of life satisfaction reveal inequality aversion unrelated to political alignment

Created by
  • Haebom

Author

Crispin Cooper, Ana Fredrich, Tommaso Reggiani, Wouter Poortinga

Outline

This study examined people's willingness to prioritize societal well-being, and the trade-offs they accept between fairness and individual well-being, through a stated preference experiment with a representative UK sample (n=300). Individual-level utility functions were estimated within an expected utility maximization (EUM) framework, and their sensitivity to the overweighting of small probabilities was tested using cumulative prospect theory (CPT). The majority of participants exhibited concave (risk-averse) utility curves and a stronger aversion to inequality in societal life satisfaction than to individual risk. These preferences were unrelated to political affiliation, suggesting a shared normative stance on the fairness of happiness across ideological boundaries. The findings raise concerns about the use of average life satisfaction as a policy indicator and support the development of nonlinear utility-based alternatives that more accurately reflect collective human values. Implications for public policy, happiness measurement, and the design of value-aligned AI systems are discussed.
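
To make the core argument concrete, the sketch below shows how a concave (inequality-averse) utility can rank two societies with the same average life satisfaction differently, which is the motivation for nonlinear alternatives to the mean. The CRRA functional form, the curvature parameter eta, and the example scores are illustrative assumptions, not the utility functions estimated in the paper.

```python
import math

def crra_utility(ls, eta=1.5):
    """Concave utility of a life-satisfaction score (0-10 scale); eta > 0 sets the curvature."""
    ls = max(ls, 1e-9)  # guard against zero for the log/power transform
    if eta == 1.0:
        return math.log(ls)
    return (ls ** (1.0 - eta) - 1.0) / (1.0 - eta)

def social_welfare(scores, eta=1.5):
    """Average *utility* of life satisfaction, rather than the average score itself."""
    return sum(crra_utility(s, eta) for s in scores) / len(scores)

# Two hypothetical societies with the same mean life satisfaction (6.0)
equal   = [6, 6, 6, 6]   # evenly distributed
unequal = [9, 9, 5, 1]   # same mean, large inequality

print(sum(equal) / 4, sum(unequal) / 4)                 # identical means: 6.0 and 6.0
print(social_welfare(equal), social_welfare(unequal))   # concave utility ranks the equal society higher
```

Under a concave utility, the low score of 1 is penalized more than the high scores of 9 are rewarded, so the unequal society receives lower social welfare even though the plain average cannot distinguish the two.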

Takeaways, Limitations

Takeaways:
Provides direct evidence on public preferences for prioritizing societal well-being and on the trade-offs people accept between fairness and individual well-being.
Points out the limitations of average life satisfaction as a policy indicator and highlights the need for nonlinear utility-based alternatives.
Shows that normative positions on the fairness of happiness are shared across ideological boundaries.
Offers implications for public policy design, happiness measurement, and the design of value-aligned AI systems.
Limitations:
The sample size (n=300) is relatively small, which may limit generalizability.
As with all stated preference experiments, reported choices may diverge from actual behavior.
Because the study was conducted on a British population, generalizations to other countries should be made with caution.