Daily Arxiv

This page collects papers on artificial intelligence published around the world.
Summaries are generated using Google Gemini, and the page is operated on a non-profit basis.
Copyright in each paper belongs to its authors and their institutions; when sharing, simply cite the source.

Coordination Requires Simplification: Thermodynamic Bounds on Multi-Objective Compromise in Natural and Artificial Intelligence

Created by
  • Haebom

Author

Atma Anand

Thermodynamic Coordination Theory

Outline

This paper presents fundamental thermodynamic constraints faced by information-processing systems that coordinate multiple agents and goals. Coordination foci of maximal utility are under greater selection pressure for discoverability among agents than for accuracy. The information-theoretic minimum description length of a coordination protocol at precision $\varepsilon$ is bounded below by $L(P)\geq NK\log_2 K+N^2d^2\log (1/\varepsilon)$, which drives progressive simplification. Coordination dynamics alter the environment itself and shift optimization across hierarchical levels. Deviation from the original foci necessitates recalibration, generating persistent metastable states and hysteresis until a significant environmental change triggers a phase transition via spontaneous symmetry breaking. The paper defines a coordination temperature to predict critical phenomena, estimates the cost of coordination operations, and identifies measurable signatures across systems ranging from neural networks to restaurant bills to bureaucracies. It also extends Arrow's impossibility theorem on consistent preference aggregation, finding that preferences are recursively bound whenever they are combined; this can explain alignment violations in large language models trained with multi-objective gradient descent and reinforcement learning from human feedback (RLHF) loops. The resulting framework, Thermodynamic Coordination Theory (TCT), shows that coordination requires rapid information loss.
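As a rough illustration, the quoted lower bound can be evaluated numerically. The sketch below is a minimal, hedged reading of the formula: the summary does not define $N$, $K$, or $d$, so they are interpreted here as the number of agents, the number of coordination foci, and the objective dimension, and the unspecified base of the second logarithm is assumed to be 2 so that both terms count bits.

```python
import math

def coordination_mdl_lower_bound(N: int, K: int, d: int, eps: float) -> float:
    """Evaluate the quoted lower bound on the description length L(P)
    of a coordination protocol at precision eps:

        L(P) >= N*K*log2(K) + N^2 * d^2 * log(1/eps)

    The reading of N (agents), K (foci), and d (objective dimension) is
    an assumption; the base of the second logarithm is assumed to be 2.
    """
    return N * K * math.log2(K) + (N ** 2) * (d ** 2) * math.log2(1.0 / eps)

# Example: 10 agents, 4 foci, 3-dimensional objectives, 1% precision.
bound = coordination_mdl_lower_bound(N=10, K=4, d=3, eps=0.01)
print(f"L(P) >= {bound:.1f} bits")
```

Both terms grow superlinearly with system size ($NK\log_2 K$ and $N^2d^2$), which is consistent with the paper's claim that scaling up coordination forces progressive simplification.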

Takeaways, Limitations

Coordination foci are under greater selection pressure for discoverability than for accuracy.
An information-theoretic lower bound on the minimum description length of coordination protocols is presented.
Coordination dynamics change the environment itself and shift optimization across hierarchical levels.
Any deviation from the original foci requires recalibration, creating persistent metastable states and hysteresis.
A coordination temperature is defined to predict critical phenomena and estimate the cost of coordination operations.
An extension of Arrow's theorem shows that preferences are recursively bound whenever they are combined.
This explains alignment violations in large language models trained with multi-objective gradient descent and RLHF.
Coordination requires rapid information loss.