
Apr 9, 2025 - 18:02
Stanford’s AI Index Highlights Growing Divide Between Open and Proprietary Models

 In 2024, nearly 90% of the world’s most notable AI models came out of industry. Just a year prior, that figure was 60%. As the scale, cost, and compute required to build frontier AI systems continue to rise, the gap between academic and corporate development has grown harder to ignore. 

The latest AI Index Report from Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) offers a snapshot of this accelerating shift. Academia remains a major driver of foundational research, still producing a significant share of highly cited papers. But it is increasingly outpaced when it comes to building the largest and fastest models. Training compute requirements now double every five months, dataset sizes double every eight months, and training energy use rises year over year. At the edge of frontier AI innovation, the bar for entry is climbing fast. 
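To put those doubling times in perspective, a quick back-of-envelope calculation (using only the five-month and eight-month figures cited above; the annualized rates are derived, not from the report) shows how steep the curves are:

```python
# Annualize the doubling times cited by the AI Index:
# training compute doubles every 5 months, datasets every 8.
# Growth over 12 months = 2 ** (12 / doubling_time_in_months).
compute_per_year = 2 ** (12 / 5)   # ~5.3x more training compute each year
data_per_year = 2 ** (12 / 8)     # ~2.8x larger datasets each year

print(f"compute: {compute_per_year:.1f}x/yr, data: {data_per_year:.1f}x/yr")
```

In other words, a doubling every five months compounds to more than a fivefold increase in compute per year, which is why the cost of staying at the frontier escalates so quickly.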

This trend has triggered alarm in scientific communities that depend on access to advanced AI systems to do their work. As the costs and infrastructure needs of model development scale beyond the reach of academic labs, there are rising concerns about reproducibility, transparency, and scientific independence.

But the story isn’t entirely bleak. According to the report, open-weight models are beginning to close the performance gap with their closed-source industry counterparts. On some key benchmarks, the difference between open and closed models shrank from 8% to just 1.7% in the span of a year. That rapid progress offers hope to researchers, educators, and public-sector institutions who rely on open tools to build domain-specific applications or evaluate new methods.

Figure 2.1.34 illustrates the performance trends of the top closed-weight and open-weight LLMs on the Chatbot Arena Leaderboard, a public platform for benchmarking LLM performance. (Source: Stanford HAI AI Index Report)

Also working in favor of accessibility is a dramatic drop in inference costs. Between November 2022 and October 2024, the cost of running a system performing at the level of GPT-3.5 fell more than 280-fold. Hardware costs have declined by 30% annually, and energy efficiency is improving by 40% per year. These trends are lowering the barrier to entry for AI developers and users outside the hyperscale tier, even if the barrier to training frontier models remains high. 
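The 280-fold figure is easier to grasp when spread over the roughly 23 months between November 2022 and October 2024. The sketch below assumes a smooth, compounding decline, which the report does not claim; it simply annualizes the headline number:

```python
# Back-of-envelope: spread the 280-fold inference cost drop evenly
# over the ~23 months from November 2022 to October 2024 (an assumed
# smooth decline, used here only to annualize the headline figure).
months = 23
total_drop = 280

monthly_factor = total_drop ** (1 / months)    # cost falls ~1.28x each month
annualized = total_drop ** (12 / months)       # cost falls ~19x each year

print(f"~{monthly_factor:.2f}x cheaper per month, ~{annualized:.1f}x cheaper per year")
```

Even under that simplifying assumption, the implied rate of improvement, roughly an order of magnitude per year, helps explain why capable AI is reaching users well outside the largest labs.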

Still, the concentration of frontier innovation raises broader questions. Industry now controls the majority of influential models, and competition at the top is tightening. According to Stanford, the performance gap between the first- and tenth-ranked models dropped from 11.9% to 5.4% in just one year. The frontier is not only rapidly advancing but is also becoming increasingly crowded. For institutions without access to proprietary tools, data, or compute, the window to participate meaningfully may be shrinking.

The difference between the highest- and 10th-ranked models on the Chatbot Arena Leaderboard dropped from 11.9% in 2024 to just 5.4% by early 2025, reflecting tighter competition at the frontier. (Source: Stanford HAI AI Index Report)

The stakes are especially high for science. From climate modeling to biomedical research, access to the most capable AI systems can directly influence the speed and scope of discovery. That makes the health of the open-source AI ecosystem not just a technical issue, but a scientific one. As open models grow more capable, and as efforts to build collaborative, transparent tools gain momentum, the hope is that researchers will retain at least some room to experiment and innovate at the edge, without needing a corporate partner or a billion-dollar budget.

Amid rising concerns over access and control in AI development, Stanford HAI's report also emphasized that the technology’s impact extends far beyond the lab: “AI is a civilization-changing technology — not confined to any one sector, but transforming every industry it touches,” said Russell Wald, executive director at Stanford HAI and member of the AI Index Steering Committee, in a release. “Last year we saw AI adoption accelerate at an unprecedented pace, and its reach and impact will only continue to grow. The AI Index equips policymakers, researchers, and the public with the data they need to make informed decisions — and to ensure AI is developed with human-centered values at its core.”

In the race to define the future of AI, who gets to build at the frontier and who gets left behind may be just as important as how powerful the models become. 

Access the full Stanford HAI 2025 AI Index Report at this link.