Tech giants push to dilute Europe’s AI Act
London: The world’s biggest technology companies have embarked on a final push to persuade the European Union to take a light-touch approach to regulating artificial intelligence as they seek to fend off the risk of billions of dollars in fines.
EU lawmakers in May agreed the AI Act, the world’s first comprehensive set of rules governing the technology, following months of intense negotiations between different political groups.
But until the law’s accompanying codes of practice have been finalised, it remains unclear how strictly rules around ‘general purpose’ AI (GPAI) systems, such as OpenAI’s ChatGPT, will be enforced and how many copyright lawsuits and multi-billion dollar fines companies may face.
The EU has invited companies, academics, and others to help draft the code of practice, receiving nearly 1,000 applications, an unusually high number according to a source familiar with the matter who requested anonymity because they were not authorised to speak publicly.
The AI code of practice will not be legally binding when it takes effect late next year, but it will provide firms with a checklist they can use to demonstrate their compliance. A company claiming to follow the law while ignoring the code could face a legal challenge.
“The code of practice is crucial. If we get it right, we will be able to continue innovating,” said Boniface de Champris, a senior policy manager at trade organisation CCIA Europe, whose members include Amazon, Google and Meta.
“If it’s too narrow or too specific, that will become very difficult,” he added.
Companies such as Stability AI and OpenAI have faced questions over whether using bestselling books or photo archives to train their models without their creators’ permission is a breach of copyright.
Under the AI Act, companies will be obliged to provide “detailed summaries” of the data used to train their models. In theory, a content creator who discovered their work had been used to train an AI model may be able to seek compensation, although this is being tested in the courts.
Some business leaders have argued that the required summaries should contain only scant detail in order to protect trade secrets, while others say copyright-holders have a right to know if their content has been used without permission.
OpenAI, which has drawn criticism for refusing to answer questions about the data used to train its models, has also applied to join the working groups, according to a person familiar with the matter, who declined to be named.
Google has also submitted an application, a spokesman said. Meanwhile, Amazon said it hopes to “contribute our expertise and ensure the code of practice succeeds”.
Maximilian Gahntz, AI policy lead at the Mozilla Foundation, the non-profit organisation behind the Firefox web browser, expressed concern that companies are “going out of their way to avoid transparency”.
“The AI Act presents the best chance to shine a light on this crucial aspect and illuminate at least part of the black box,” he said.
Some in business have criticised the EU for prioritising tech regulation over innovation, and those tasked with drafting the text of the code of practice will strive for a compromise.
Last week, former European Central Bank chief Mario Draghi told the bloc it needed a better coordinated industrial policy, faster decision-making and massive investment to keep pace with China and the United States.
Thierry Breton — a vocal champion of EU regulation and critic of non-compliant tech companies — this week quit his role as European Commissioner for the Internal Market, after clashing with Ursula von der Leyen, the president of the bloc’s executive arm.
Against a backdrop of growing protectionism within the EU, homegrown tech companies are hoping carve-outs will be introduced in the AI Act to benefit up-and-coming European firms.