EU prepares to push back on private sector carve-out from international AI treaty


Brussels: The European Commission is preparing to push back on a US-led attempt to exempt the private sector from the world’s first international treaty on Artificial Intelligence, while seeking as much alignment as possible with the EU’s AI Act.

The Council of Europe, an international human rights body with 46 member countries, set up the Committee on Artificial Intelligence at the beginning of 2022 to develop the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law.

The binding international treaty, the first of its kind on AI, is facing crunch time: the current plan is to finalise it by March, with a view to adopting it at the ministerial level in May. Thus, many open questions must be resolved at a plenary meeting on 23-26 January.

The most consequential pending issue concerns the scope of the convention. In June, Euractiv revealed how the United States, which is participating as an observer country, was pushing to exempt companies by default, leaving it up to the signatory countries to decide whether to ‘opt in’ the private sector.


This possibility is still on the table, as confirmed by a draft of the treaty the Council of Europe published on 18 December. Participating countries will have to decide on the different options by consensus, with some decisions already expected at an informal meeting next week.

“The Union should not agree with the alternative proposal(s) that limit the scope of the convention to activities within the lifecycle of artificial intelligence systems by or on behalf of a Party or allow application to the private sector only via an additional protocol or voluntary declarations by the Parties (opt-in),” reads an information note from the Commission, obtained by Euractiv.

The document notes that these proposals would limit the treaty’s scope by default, “thus diminishing its value and sending a wrong political message that human rights in the private field do not merit the same protection.”

The EU executive notes that this approach would contradict international law requiring the respect of human rights by private entities, and that it would also fail to address the recent societal challenges and regulatory concerns raised by the development of ever-more-powerful AI systems.

As a result, the Commission wants to oppose any formulation that would fail to provide the necessary legal certainty and would leave too much discretion to the signatories on the scope, since the treaty’s intent is precisely to set a human rights baseline for AI.


The Commission has a mandate to negotiate the international treaty on behalf of EU countries insofar as it overlaps with the AI Act, landmark legislation meant to regulate Artificial Intelligence based on its capacity to cause harm.

For instance, the EU executive wants the convention to apply to the whole lifecycle of AI systems, “with the exception of research and development activities regarding artificial intelligence systems in a manner consistent with the research exception in the AI Act.”

During the AI Act negotiations, one heated debate was around a national security exemption pushed for by France; the same question has resurfaced in the context of the AI convention.

In this regard, the Commission is pushing for an explicit exclusion of AI systems exclusively developed for national security, military and defence purposes in a manner that is consistent with the EU’s AI law.


In other words, Brussels does not seem to have any appetite for the AI treaty to go beyond the AI Act, even on matters where there is not necessarily a conflict and where the convention could be more ambitious.

A complete overlap between the international treaty and the EU regulation is not a given, since the former is meant to protect human rights, while the latter is intended merely to harmonise EU market rules following a traditional product safety blueprint.

For instance, following the path of the AI Act on law enforcement, the Commission hopes that “the convention allows the parties to derogate from the transparency-related provisions of the convention where necessary for the purposes of prevention, detection and prosecution of crimes.”

Similarly, since the AI Act bans specific applications deemed to pose an unacceptable risk, such as social scoring, the Commission is pushing to extend these prohibitions to the international level via a moratorium or a ban, as this would “increase the added value of the convention”.

The only significant exception where the EU executive seems keen to go beyond the AI Act (but still in line with Union law) is in supporting a provision that protects whistle-blowers in the implementation of the convention – one that the UK, Canada and Estonia have opposed.