The 2025 AI Index Report, published by Stanford University’s Human-Centered AI Institute (HAI), is a comprehensive analysis of the state of AI development around the world. This eighth edition tracks and visualizes data on AI technology performance, economic impact, education, policy, and responsible AI, providing an empirical foundation for understanding the field’s rapid development.

• AI technology performance continues to improve at a remarkable pace
• The US remains the leader in developing top models, while China is rapidly closing the gap
• Corporate AI investment hits a record high, and government regulation is increasing
• AI is quickly becoming part of daily life, cutting costs and raising efficiency
• The responsible AI ecosystem is developing unevenly, and perceptions of AI differ sharply across countries
• AI’s contribution to science is growing, but reasoning ability remains a challenge

Steady improvement in AI technology performance

In just one year, AI performance has improved dramatically on demanding benchmarks introduced in 2023, such as MMMU, GPQA, and SWE-bench: scores rose by 18.8 percentage points on MMMU, 48.9 on GPQA, and 67.3 on SWE-bench. The latest models have also shown significant improvements in generating high-quality video, and in some settings AI agents have even outperformed humans. Of particular note, the performance gap between the top-ranked and 10th-ranked models on key benchmarks narrowed from 11.9% to 5.4% in a single year, and the gap between the top two models is just 0.7%, suggesting that competition at the AI frontier is intensifying.

AI permeates everyday life

From healthcare to transportation, AI is quickly moving from the lab into everyday life. As of August 2024, the FDA had approved 950 AI-based medical devices, up from 6 in 2015 and 221 in 2023. Self-driving cars on the road are no longer experimental: Waymo, the leading autonomous vehicle operator in the United States, now provides more than 150,000 autonomous rides per week.

AI models are also becoming more efficient, cheaper, and more accessible. The inference cost of a GPT-3.5-level system fell by more than 280x between November 2022 and October 2024. At the hardware level, costs have declined by 30% year over year, while energy efficiency has improved by 40% year over year. In addition, open-weight models are closing the gap with closed models, with the performance difference on some benchmarks shrinking from 8% to 1.7% in a single year. Together, these trends are rapidly lowering the barriers to advanced AI.

Active corporate AI investment and competition in model development

Private AI investment in the United States grew to $109.1 billion in 2024, compared with $9.3 billion in China and $4.5 billion in the United Kingdom. Generative AI in particular attracted $33.9 billion in private investment worldwide, up 18.7% from 2023. Corporate adoption is also accelerating: 78% of organizations reported using AI in 2024, up from 55% the previous year. In model development, the US produced 40 notable AI models in 2024, far ahead of China’s 15 and Europe’s 3. While the US still leads in quantity, Chinese models are rapidly closing the quality gap: the performance difference on key benchmarks such as MMLU and HumanEval narrowed from double digits in 2023 to near parity in 2024. Meanwhile, China continues to lead in AI publications and patents.

However, it is also worth noting that the cost of training AI models is rising sharply. Training Google’s Gemini 1.0 Ultra is estimated to have cost around $192 million, an estimate derived from training time together with the type and quantity of hardware used. In general, as parameter counts, training time, and the amount of training data grow, training costs rise accordingly.
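Such estimates come down to simple back-of-envelope arithmetic: hardware quantity × training time × an amortized cost per accelerator-hour. The short Python sketch below illustrates that calculation; the function name and all input values are hypothetical placeholders, chosen only so the toy example lands near the $192 million figure, and are not the actual parameters behind the report’s Gemini estimate.

```python
# Back-of-envelope training cost estimate (illustrative sketch only).
# All inputs below are hypothetical placeholders, not figures from the AI Index.

def training_cost_usd(num_accelerators: int,
                      training_days: float,
                      hourly_cost_per_accelerator: float) -> float:
    """Total cost = accelerators x training hours x amortized hourly cost
    (hardware depreciation, energy, and overhead folded into one rate)."""
    training_hours = training_days * 24
    return num_accelerators * training_hours * hourly_cost_per_accelerator

if __name__ == "__main__":
    # Hypothetical run: 8,000 accelerators for 100 days at $10 per accelerator-hour.
    cost = training_cost_usd(num_accelerators=8_000,
                             training_days=100,
                             hourly_cost_per_accelerator=10.0)
    print(f"Estimated training cost: ${cost:,.0f}")  # -> $192,000,000
```

The same scaling logic explains why frontier training costs keep climbing: more parameters and more training data demand more accelerator-hours, and the total grows roughly in proportion to each factor.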
Responsible AI and the Global Perception Gap

AI-related incidents are rising sharply, yet standardized responsible AI (RAI) evaluations remain rare among major industry model developers. New benchmarks such as HELM Safety, AIR-Bench, and FACTS, however, offer promising tools for assessing factuality and safety. A gap also persists between companies recognizing RAI risks and taking meaningful action. Governments, meanwhile, are showing greater urgency: in 2024, global cooperation on AI governance intensified, with organizations including the OECD, EU, UN, and African Union publishing frameworks focused on transparency, trust, and other core RAI principles.

Globally, optimism about AI is growing, but deep regional divides remain. In countries such as China (83%), Indonesia (80%), and Thailand (77%), large majorities see AI products and services as more beneficial than harmful. By contrast, optimism remains far lower in Canada (40%), the United States (39%), and the Netherlands (36%). Sentiment is shifting, though: since 2022, optimism has grown significantly in previously skeptical countries, including Germany (+10%), France (+10%), Canada (+8%), the United Kingdom (+8%), and the United States (+4%).