Wall Street certainly seems, despite recent wobbles, to be optimistic about the future of artificial intelligence. Notwithstanding some chaotic policymaking, the imposition of the steepest tariffs since the 1930s, a lengthy government shutdown and mounting concerns about the health of the jobs market, the benchmark S&P 500 index has advanced around 15% this year and the tech-heavy NASDAQ by closer to 20%. Much of that price appreciation has been propelled by an AI-growth story.
And the biggest players in the AI market are, understandably enough, perhaps even more optimistic. Between now and 2030, the largest tech firms plan collectively to spend around $5 trillion on the infrastructure required to support the wider use of artificial intelligence systems. Even in macroeconomic terms, that is a very large number.
On the other hand, as The Economist recently reported, there are signs that the pace of AI adoption in the wider economy may be slowing or even tailing off.
Researchers at the Census Bureau ask firms if they have used artificial intelligence “in producing goods and services” in the past two weeks. Recently, we estimate, the employment-weighted share of Americans using AI at work has fallen by a percentage point, and now sits at 11%. Adoption has fallen sharply at the largest businesses, those employing over 250 people. Three years into the generative-AI wave, demand for the technology looks surprisingly flimsy.
That Census Bureau data needs to be treated with some caution. The question posed to firms, many argue, is too restrictive, and other measures of AI usage – by both firms and workers – point to higher rates of adoption.
The most recent study by McKinsey – the global consulting firm – showed something of a mixed picture. It found widespread AI usage, but often only at the pilot stage.
Our latest survey shows a larger share of respondents reporting AI use by their organizations, though most have yet to scale the technologies. The share of respondents saying their organizations are using AI in at least one business function has increased since our research last year: 88 percent report regular AI use in at least one business function, compared with 78 percent a year ago. But at the enterprise level, the majority are still in the experimenting or piloting stages, with approximately one-third reporting that their companies have begun to scale their AI programs.
In the medium to longer term, the question of whether or not the multi-trillion dollar bet on AI will pay off is one of the biggest facing the global economy.
Global productivity growth, especially in the rich world, has been poor since the financial crisis of 2007-2009, and AI is commonly cited as one potential reason for optimism.
This week, the Clark Center’s US and European Experts Panels offered their own views – with reassuringly similar results.
Both panels were asked whether ‘adoption of artificial intelligence will lead to a substantial increase in the growth rates of real per capita income in the US and Western Europe over the next ten years’.
Weighted by confidence, 4% of US respondents strongly agreed, 42% agreed, 47% were uncertain, and 8% disagreed.
In the European poll, also weighted by confidence, 2% strongly agreed, 42% agreed, 50% were uncertain, and 6% disagreed.
Clearly, being confident that AI was unlikely to substantially increase the growth rate of per capita income was a minority view. Daron Acemoglu of MIT was one of those who held it, pointing to some of his recent work on the subject, whilst noting that “substantial growth effects require applications that can increase productivity. These aren’t around yet. While future capabilities could be much better than now, it takes time for these applications to be perfected and then adopted in complex organizations”.
By contrast, Chad Syverson of Chicago Booth (who also pointed to some of his own related work) argued that “I expect substantial gains. It is possible, even likely, that the initially measured productivity gains will understate true productivity growth (though with a corresponding overstatement in future years)”. Olivier Blanchard of the Peterson Institute, meanwhile, not only agreed but even put a number on it, offering “the usual caveat about uncertainty being particularly large in this case. But I put the increase in productivity growth (not necessarily output growth) at 0.5% or more with prob >.5”.
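To get a rough sense of what a figure like Blanchard’s implies, consider a back-of-the-envelope compounding calculation. The sketch below is purely illustrative – the 1% baseline growth rate is an assumption, not a number from the panel – and simply shows how an extra half a percentage point of annual productivity growth cumulates over a decade:

```python
# Illustrative only: what an extra 0.5 percentage points of annual
# productivity growth compounds to over ten years.
baseline = 0.01   # assumed 1% baseline annual productivity growth
boost = 0.005     # Blanchard's suggested increment (0.5pp per year)
years = 10

# Productivity level after ten years on each path (index, start = 1.0)
level_baseline = (1 + baseline) ** years
level_boosted = (1 + baseline + boost) ** years

# Cumulative gap between the two paths after a decade
extra = level_boosted / level_baseline - 1
print(f"{extra:.1%}")  # roughly a 5% higher productivity level
```

In other words, a seemingly modest annual increment leaves the economy around 5% more productive after ten years than it would otherwise have been – a material difference given how weak productivity growth has been since the financial crisis.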
A plurality of respondents noted the high level of uncertainty. This should be no surprise given that the question involved taking a view on not only the pace of technological change over the coming decade but also the rate of adoption of this new and still developing technology and how it will be used. As Aaron Edlin of Berkeley argued, “AI may lead to higher productivity. AI may lead to less education and lower productivity. AI may be trained on junk data created by AI trained on junk data created by AI in the future. The future of AI is even more uncertain than the future usually is”.
Accurately forecasting how any new technology will play out over the coming decade(s) is notoriously difficult. Thirty years ago, in 1995, there were those who believed that the internet would fundamentally reshape and remake the economy and those who believed it was likely just a fad. The internet has indeed transformed how society operates and created entirely new – some previously undreamed of – business models for some firms whilst destroying others. It has affected how almost every firm operates on some level. And yet, despite thirty years of visible internet-driven change in culture and commerce, its impact on trend productivity growth remains a matter of debate. When it comes to the economic impact of AI, it is no surprise that uncertainty levels are so high.
