This paper explores the notion of "power," a central concern in AI safety: the pursuit of power as a goal by AI systems, the sudden or gradual loss of human power, and the balance of power in human-AI interactions and in international AI governance. At the same time, power, understood as the ability to pursue a variety of goals, is essential to human well-being. We therefore explore the idea of promoting both safety and well-being by having AI agents explicitly enhance human power and manage the power balance between humans and AI agents in a desirable way. Using a principled and partially axiomatic approach, we design a parameterizable and decomposable objective function that represents an inequality-averse and risk-averse long-term aggregate of human power. This objective takes into account bounded human rationality and social norms and, importantly, the diversity of human goals. We derive an algorithm for computing this metric from a given world model, via backward induction or a form of multi-agent reinforcement learning. We illustrate the consequences of (smoothly) maximizing this metric in various situations and explain which instrumental subgoals it entails. A careful evaluation suggests that gently maximizing an appropriate aggregate measure of human power may constitute a more beneficial goal for safe agentic AI systems than a direct utility-based goal.
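As a purely illustrative sketch (not the paper's actual algorithm or code), the following Python snippet shows how such a power metric could in principle be computed by backward induction from a small, hypothetical tabular world model: the attainability of each of several candidate human goals within a finite horizon is computed by dynamic programming and then aggregated with a risk-averse soft-minimum. The world model, goal set, and aggregation parameter are invented for illustration, and bounded rationality and social norms are omitted for brevity.

    # Illustrative sketch only: a toy "human power" metric computed by backward
    # induction on a small deterministic world model. All names and parameters
    # here are hypothetical stand-ins, not the paper's implementation.

    import numpy as np

    # Toy world model: states 0..4 on a line; action 0 moves left, action 1 moves right.
    N_STATES, N_ACTIONS, HORIZON = 5, 2, 3
    GOALS = [0, 4]                       # candidate human goals: reach state 0 or state 4

    def step(s, a):
        """Deterministic transition function of the toy world model."""
        return max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)

    def goal_attainability(goal):
        """V[h, s] = best probability of reaching `goal` within h remaining steps."""
        V = np.zeros((HORIZON + 1, N_STATES))
        V[:, goal] = 1.0                 # the goal counts as attained once reached
        for h in range(1, HORIZON + 1):  # backward induction over the remaining horizon
            for s in range(N_STATES):
                if s != goal:
                    V[h, s] = max(V[h - 1, step(s, a)] for a in range(N_ACTIONS))
        return V[HORIZON]

    def aggregate_power(state, risk_aversion=5.0):
        """Risk-averse (soft-minimum) aggregation of attainability over the goal set."""
        attain = np.array([goal_attainability(g)[state] for g in GOALS])
        return -np.log(np.mean(np.exp(-risk_aversion * attain))) / risk_aversion

    if __name__ == "__main__":
        for s in range(N_STATES):
            print(f"state {s}: aggregate human power ~ {aggregate_power(s):.3f}")

In this toy example the corner states receive lower aggregate power than the central states, because fewer of the candidate goals remain reachable from them within the horizon; an agent maximizing this aggregate would therefore steer the human toward states that keep more goals attainable.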