Over the past two months, alongside the regular coverage of the Clark Center’s polls and other news, On Global Markets has been reporting on the discussions held at the Economic Experts Conference on 9/10 October. Most of those sessions were held under the Chatham House Rule.
The development of artificial intelligence (AI), in its various guises, poses many different economic and policy issues. Some of the most interesting come in the realm of competition and antitrust policy.
Polls of both the US and European Expert Panels over the past couple of years have – broadly – come to similar views of the raw economics of the structure of the AI industry. While uncertainty is understandably high, as one might expect of any rapidly developing sector, the panels expect AI to be a concentrated industry, one in which firms hold a great deal of market power, but also one with the potential for new entrants who have not previously been associated with digital markets.
That sort of structure is probably what one might expect from any industry where a rapidly developing technology offers potential returns to scale. One participant drew an analogy with the automotive industry at the start of the twentieth century. It was not clear initially which firms would go on to dominate car production, but it was already obvious that those who took an early lead had the potential to build a strong market position.
As several conference participants noted, one major source of uncertainty is that while the potential for market power – and abuses of market power – is clearly present, it remains unclear where in the AI ecosystem – chipmakers, model makers, or consumer- and business-facing products – that power will reside.
There are also several, rather obvious, routes through which large existing tech firms can seek to leverage their existing customer base into a stronger position in AI. Notably, as one conference participant pointed out, Google has added AI summaries to its search results.
There was no clear agreement on whether the structure of the AI industry presented a new problem for competition policymakers, but there was widespread consensus that mergers are something to watch closely. Several participants stressed that ‘mergers lite’ could matter just as much: regulators need to be alert to potential sources of market power arising from partnerships, tie-ups and joint ventures, not only from formal mergers.
There was also a sense that regulators, especially in Europe, felt that they had missed an opportunity in the late 2000s and early 2010s as the large digital platforms emerged and were determined to be more proactive this time around.
The EU has already legislated to throw some grit and friction into AI product development, aiming to ensure that such products are ‘safe’ before they come to market. The question, for one participant, was whether the gains from more safety were worth the price in terms of administrative expenses and potentially slower innovation.
There was a sense that some sort of United States federal AI regulation might be favoured by the firms themselves, mostly because several states, including California, have already put statewide laws in place, and more are set to follow. Firms may well prefer to deal with one nationwide regulatory regime rather than with dozens of competing state-level ones.
Some participants also pondered whether, in the absence of a legislative regime, court decisions and tort law might play a large role. Several cases are making their way through the legal system from people seeking redress for alleged AI harms, and if punitive damages are awarded, they could serve as a road map for others bringing their own cases.
There was a great deal of scepticism that talk of slowing down AI development, or putting safety concerns first, was practical in the United States. The opportunity cost of Europe’s actions was arguably relatively low; few think that, in the counterfactual of no EU AI legislation, Europe would be a center of AI innovation. In the United States, by contrast, the perceived competition with China was something of a trump card for those arguing against anything that could slow progress.
What is more, it was noted by several participants that the EU – and other nations’ – steps to regulate AI were more often than not framed in terms of safety rather than competition.
The stakes, though, when it comes to competition policy, are – if the most optimistic projections of AI’s potential come to pass – extremely high. One participant noted that in macro terms, the optimists’ case usually focuses on the growth-enhancing productivity boost and how that would affect the labor market. In a world in which AI is a general-purpose technology that causes the costs of producing almost everything to plummet and demand for labor to collapse, the results might not be too painful. Yes, there would be much less work available, but everything would be much cheaper. People could presumably work less, consume just as much (if not more) and enjoy much more leisure time.
But what happens when you add market power and monopoly pricing into this scenario? In economic terms, when do costs and prices begin to diverge? A world in which demand for labor collapses but prices do not would be much more unpleasant.
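The wedge between costs and prices under market power can be sketched with a textbook example (my illustration, not something presented at the conference). Assume linear inverse demand p = a - b·q with hypothetical parameter values: under perfect competition a cost reduction passes through to prices one-for-one, while a monopolist facing the same demand curve passes through only half of it.

```python
# Illustrative sketch, assuming linear inverse demand p = a - b*q.
# All parameter values below are hypothetical, chosen for illustration.

def competitive_price(c: float) -> float:
    """Perfect competition: price equals marginal cost."""
    return c

def monopoly_price(a: float, c: float) -> float:
    """Monopolist maximizes (p - c) * q with p = a - b*q.
    First-order condition gives q* = (a - c) / (2b),
    so p* = (a + c) / 2 -- independent of the slope b."""
    return (a + c) / 2

a = 100.0  # demand intercept (hypothetical)
# Suppose AI slashes marginal cost from 40 to 10: a fall of 30.
for c in (40.0, 10.0):
    print(f"cost={c:5.1f}  competitive={competitive_price(c):5.1f}  "
          f"monopoly={monopoly_price(a, c):5.1f}")
```

With these numbers the competitive price falls by the full 30, but the monopoly price falls only from 70 to 55, a drop of 15: half the cost saving is retained as margin rather than passed on to consumers, which is the cost–price divergence the scenario above worries about.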
That is, of course, an extreme scenario, but one worth keeping in mind when thinking through how to regulate the markets for AI.
