Artificial Intelligence (AI) technology continues to advance rapidly and grab the attention of investors, policymakers, and media opinion formers.
For the latter, the ability of the latest generation of large language models (LLMs) not only to write increasingly passable copy but to do so in the style of named authors is causing more than a few brows to be furrowed in worry.
Regulators, as is often the case with a new technology, are struggling to keep up.
The policy concerns around AI can be divided into two broad categories. First, there is the more straightforward economic policy issue of antitrust and fears of excessive market concentration, with its potential for excess profits and damage to consumers. Then there is the wider public safety concern about potential harms from AI which, in the most extreme cases, are sometimes stretched to include extinction-level events for humanity. Whilst neither of these issues directly relates to columnists worrying about being out of a job, they at least give them an excuse to continue obsessing over the new technology.
Jonathan Kanter, the US Assistant Attorney General who heads the Justice Department’s Antitrust Division, has repeatedly clashed with big tech firms over the course of his three years in office. Earlier this month he told the Financial Times that he was now looking at AI “with urgency”. He is examining “monopoly choke points and the competitive landscape”.
The Department of Justice (DOJ) and the Federal Trade Commission (FTC) have reportedly reached an agreement whereby the DOJ will lead on investigating Nvidia whilst the FTC will lead on Microsoft and OpenAI.
Over the last two weeks, both the Clark Center’s US and European Panels have weighed in on the antitrust and wider policy concerns.
There was something of a trans-Atlantic divide in the answers. Asked whether US antitrust investigations were warranted by the need to foster competition and innovation in these industries, 23% of the European respondents strongly agreed, 58% agreed, 17% were uncertain, and just 2% disagreed when weighted by confidence. That is a strong endorsement from the experts.
For the US Panel, the question was somewhat different and taken in two parts.
Asked whether antitrust investigations of the dominant firms in AI would lead to lower prices for firms and consumers, 44% of respondents (again, and as with all figures quoted below, weighted by confidence) were uncertain, whilst just 22% agreed and 34% disagreed.
Asked whether such investigations would likely promote greater competition and innovation in AI, 37% were uncertain, 39% either strongly agreed or agreed, and 24% either disagreed or strongly disagreed.
Whilst the questions are not exact matches, that is an unusually wide difference of opinion between the panels. For the Europeans, the case for a US antitrust investigation is almost clear-cut, with more than four-fifths of respondents agreeing and very little active dissent. By contrast, the US panel is far less certain that such investigations will achieve much in the way of lower prices for users of AI or greater competition and innovation within AI.
One reading of this divergence would be that the Europeans were asked not about the likely impacts of antitrust investigations but whether they were warranted, whilst the US Panel was asked not whether such investigations were warranted but about their likely effects. Yet it is probably a stretch to believe that the European panel strongly believes such investigations are required without also expecting them to have some upside. Americans might occasionally joke that Europeans enjoy bureaucracy, but Europeans do not generally favor drawn-out semi-judicial processes just for the fun of it.
Several US respondents, rightly, emphasized the uncertainty caused by the definitional issues surrounding AI. With tech firms keen to rebrand various machine learning and statistical packages as forms of AI to show investors exposure to the hot new thing, it is not entirely clear what counts as AI and what does not. That certainly complicates any policy response, or even a formal investigation.
Moving beyond the immediate question of antitrust policy, the panels were also asked about the principles underlying AI regulation.
The US panel was asked whether potential harms from AI were better assessed by market deployment than by seeking to slow the pace of AI research and implementation.
Some 40% were uncertain, 36% either disagreed or strongly disagreed, and 24% agreed or strongly agreed.
The European Panel was asked the question from the other angle: specifically, whether “seeking to slow the pace of artificial intelligence use and implementation would be a more effective means of assessing potential harms from the technologies than market deployment and ex post assessment”.
Again, a plurality of respondents (37%) were uncertain, whilst 35% either strongly disagreed or disagreed and 29% agreed.
Amongst both panels, a frequent question was whether the pace of AI use and implementation could in practice be slowed at all. One worry was that US, or indeed European, steps to do so would simply see AI research and roll-out moving overseas. Whilst US firms may currently have the technological edge, few doubt that AI innovation is happening at the global level.
Perhaps this, then, is the key issue. Cynics, for example, might well note that the European experts were keener on a US antitrust investigation of US firms than the US experts were. AI is a rapidly developing technology that could play a large role in increasing global productivity and economic well-being over the longer run. It also has some potentially dangerous spillovers. The challenge is finding a way to regulate it at the global level that neither penalizes nor favors the firms of any given state.