Artificial Intelligence (AI) is the hot new thing. Any firm that can plausibly claim exposure to the rapidly developing technology has seen the price of its equity soar over the past year as investors seek to cash in. The most booster-ish proponents of AI see it as a game changer for economic growth, ushering in a new era of rapidly expanding productivity. Meanwhile, the doomsayers fret that it will destroy jobs at best and possibly herald the end of humanity at worst. That is, it is fair to say, an unusually large range of scenarios.
Newspaper and online columnists and opinion writers in particular – who for some inexplicable reason seem deeply interested in a technology that could generate 800-word articles in mere moments – have been keen to highlight the potential downsides. The truth, as ever, probably lies somewhere in between these extremes, but regardless, the relatively sudden emergence of AI as a live issue has left governments and regulators struggling to catch up.
This week, following on from the British government-organised AI Safety Summit last November, the UK and US governments agreed a bilateral deal on AI safety. Under the new Memorandum of Understanding, the two nations’ respective AI Safety Institutes will set out a common approach to AI safety testing and aim to share capabilities. This is the first global bilateral AI deal, but it is unlikely to be the last.
The European Union has, uncharacteristically, been the fastest mover when it comes to actual regulation. The Artificial Intelligence Act cleared the European Parliament last month and is set to come into force in 2025. The legislation is sweeping in its scope, with outright bans on some forms of AI, such as cognitive behavioural manipulation, social scoring and biometric identification, and significant new obstacles to the use of AI in what the Act deems to be high-risk areas. These include such potential applications as AI-assisted surgery, AI-led CV sifting in recruitment, AI exam-marking in education, and the use of AI in transport systems that could potentially endanger life.
Whilst the Brussels authorities have been keen to talk up their ‘risk-based’ approach, the industry in general regards the EU measures as focused more on the potential downsides of the new technology and possibly overly restrictive. Tech bosses draw a distinction with the US and UK approaches, which they see as more focused on the upsides of growth and innovation. Nor are tech industry insiders alone in holding this view. President Macron of France warned in December that:
“We can decide to regulate much faster and much stronger than our major competitors. But we will regulate things that we will no longer produce or invent. This is never a good idea”.
Whilst stricter regulation may slow EU-based innovation in AI, the European Commission is optimistic that a clear regulatory framework will reduce uncertainty and encourage further research and development.
EU policymakers are also hoping that their adoption of the world’s first substantive set of legislative rules for AI will allow the EU to act as an effective global rule-setter. There is precedent for this so-called Brussels effect when it comes to global tech policy and regulation. The EU’s General Data Protection Regulation (GDPR) of 2016 led US firms such as Meta and Microsoft to change how they handle online privacy, although quite how effective those changes actually were remains a matter of much debate.
Whether or not the EU will manage to establish itself as the global rule-setter for artificial intelligence in the same way it has for online privacy remains unclear, as does the impact of the new rules on the EU’s own domestic AI sector.
When the panel of experts was asked whether “the EU’s legislation to regulate artificial intelligence is likely to put European technology firms at a substantial disadvantage to their competitors elsewhere in the world”, 49% of respondents, weighted by confidence, expressed uncertainty.
The panel were clearer on the second question, with a plurality – weighted by confidence – agreeing that “by providing a clear set of rules, the EU’s legislation on artificial intelligence is likely to enhance research and innovation by firms building the new technology”, although here too around a third of respondents were uncertain.
The high levels of uncertainty amongst the experts are understandable. The future of the underlying technology itself is hard to predict, let alone how competing regulatory frameworks will develop in the United States, China and other large markets. If the EU does indeed establish itself as a global rule-setter, then many of the potential downsides to having moved first may not arise. On the other hand, if competitors move in the direction of less stringent regulation, then EU firms may find themselves at a distinct competitive disadvantage.
Given the way in which AI has captured the attention of investors, politicians and opinion writers over the past two years, some form of regulation was always likely. In the short term, providing more certainty on what that regulation actually looks like is probably a boon for EU-based AI firms. But in the longer run, the success or failure of the EU’s approach will be determined by how its competitors choose to respond.