Europe’s AI crackdown looks doomed to be felled by Silicon Valley lobbying power
Wednesday will be a fateful day in Brussels, a faraway city of which post-Brexit Britain knows little and cares less. It’s the day on which the EU’s AI proposals enter the final stages of a tortuous lawmaking process.
The bill is a landmark attempt – the first in the world – to regulate artificial intelligence (AI) seriously, according to its capacity to cause harm. Those final stages are the so-called “trilogues”, in which the EU parliament, commission and council agree on what should be in the bill, and therefore on what becomes EU law. Big day, high stakes, in other words.
However, the bill is now hanging in the balance because of internal disagreement about some key aspects of the proposed legislation, especially those concerned with the regulation of “foundation” AI models, which are trained on massive datasets. In EU-speak these are “general-purpose AI” (GPAI) systems – ones capable of a range of general tasks such as text synthesis, image manipulation and audio generation – and include GPT-4, Claude and Llama. These systems are astonishingly expensive to train and build: salaries for the geeks who work on them start at Premier League striker level and go stratospheric (with added stock options), and a single 80GB Nvidia Hopper H100 GPU – a key component of machine-learning hardware – costs £26,000, with thousands of them needed to build a respectable system (a cluster of 10,000 would cost £260m for the chips alone). Not surprisingly, therefore, there are only about 20 firms globally that can afford to play this game. And they have money to burn.
Why are these foundation models important? The clue is in the name: they have become the base on which the next phase of the tech future is being built – just as the world wide web in the early 1990s became the foundation on which our current online world was constructed. GPAIs will thus be the basis for innumerable new applications – mostly created by small companies and startups – which implies that any “issues” (flaws, security vulnerabilities, manipulative algorithms etc) in foundation models will inevitably ripple through and infect the networked world.
In metaphorical terms, it’s as if we were building a new global system for supplying drinking water. The GPAIs are the giant reservoirs from which we – both corporations and individuals – will get our drinking water. And, thus far, all of the reservoirs are owned and controlled by US companies. So we have a vital interest in knowing how the water going into the reservoirs is filtered, purified, enhanced. What additives, preservatives, microbes, supplements have been added by the reservoir owners?
At the heart of the arguments now raging in Brussels is the fact that the big tech companies – the owners of those metaphorical reservoirs – do not welcome the idea of regulators being able to inspect what they are doing to the water. Until recently, though, it seemed that many members of the European parliament – the EU’s only directly elected institution – were determined to write that level of scrutiny into the AI legislation.
And then something changed. Suddenly, the French, German and Italian governments combined to advocate less intrusive regulation of foundation models. According to these three musketeers, what Europe needs is a “regulatory framework which fosters innovation and competition, so that European players can emerge and carry our voice and values in the global race of AI”. On this view, the right approach is not to impose legal regulation on the (mostly American) companies dominating the AI racket, but to allow self-regulation through “company pledges and codes of conduct”.
Eh? Can it be that no senior officials in these three countries have been following the behaviour of US tech companies since the dawn of the web? Hoping for ethical behaviour from such outfits is like praying for the conversion of China to Catholicism. More immediately, have they not noticed what happened only the other day, when OpenAI’s board – ostensibly charged with ensuring that the company’s foundation models were good for humanity – was suddenly replaced by people with less elevated objectives, namely the maximisation of shareholder value?
Sadly, the Franco-German-Italian volte-face has a simpler, more sordid explanation: the power of the corporate lobbying that has been brought to bear on everyone in Brussels and in European capitals generally. And in that context, isn’t it interesting to discover (courtesy of an investigation by Time) that while Sam Altman – then, and now once again, chief executive of OpenAI, after being fired and rehired – had spent weeks touring the world burbling on about the need for global AI regulation, behind the scenes his company had lobbied for “significant elements of the EU’s AI act to be watered down in ways that would reduce the regulatory burden on the company”, and had even authored some text that found its way into a recent draft of the bill?
So, will the EU stand firm on preventing AI companies from marking their own homework? I fervently hope that it does. But only an incurable optimist would bet on it.
A nice essay on the Cambridge University Press website about the dilemma that faced the Jewish owners of the Hotel Kaiserhof when Hitler chose it as his Berlin base before he became chancellor.
In 2016 the New Yorker ran a prescient profile of Sam Altman, the chief executive of OpenAI, which makes interesting reading now.
A nice reflective essay by Ed Simon on the Millions website about the invention and development of photography.