At the end of 2022, the Council of the European Union adopted its common position or ‘general approach’ to the proposed Artificial Intelligence Act (AI Act), paving the way for a vote on the legislation in the Committees of the European Parliament in April 2023. The Act stands to be the world’s most significant piece of AI regulation to date, going further in advancing human rights protection than more libertarian regulatory models seen elsewhere. Unlike the GDPR, the AI Act will not directly confer rights on individual citizens but will instead regulate the use of AI in the provision of products and services.

Given its extraterritorial nature and potential to become a GDPR-like global standard, the AI Act has potentially far-reaching human rights implications. This blog post will seek to address some of the areas where the Act could go further in protecting human rights.

‘Deep fakes’

Of note is the Act’s approach to AI systems used to ‘generate or manipulate content’, better known as ‘deep fakes’. Central to the legislation is the right of consumers to make informed choices by knowing when they are interacting with an AI system rather than a human being. To that end, Article 52 would impose an obligation on the creators of deep fakes to disclose that content has been ‘artificially generated or manipulated’, except where ‘necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences’.

It remains concerning, however, that while the Act designates systems capable of detecting deep fakes as ‘high risk’, deep fakes themselves are categorised merely as ‘limited risk’. As a result, no sanction is explicitly provided for non-compliance with the disclosure obligation. The privacy risks raised by this designation are particularly concerning given their gendered dimension: as many as 80% of deep fake videos consist of non-consensual pornographic content involving women. By designating this content as of merely ‘limited risk’, the Act fails to confront the substantive equality implications of AI and the structural inequalities that permit such privacy incursions.

Biometric identification

While prohibiting the use of real-time biometric identification systems such as facial and voice recognition in public places, the Act carves out a number of broad exceptions for law enforcement purposes. Notably, the ban on real-time use does not apply where it is deemed ‘strictly necessary’ in situations involving missing children, terrorist attacks, and when dealing with suspects of certain offences with a maximum detention period of more than three years.

Although a distinction is drawn between real-time and ex post use of biometric recognition systems, in practice, the differences between the two are limited. This is because authorities can simply collect biometric data in real-time before searching against databases after the event.

With its broad exceptions to real-time use and its lack of provision for ex post use, the Act risks not going far enough to prevent citizens from becoming ‘walking ID cards’. And while the Act takes account of biometric identification by public actors, it is concerning that there remains a lack of provision on the use of biometric identification systems by private actors in public spaces.

Open-source AI

Concerns have also been raised over the Act’s potential chilling effect on the creation of open-source AI. The risk that open-source developers could be exposed to liability for AI systems derived from their work by big tech companies jeopardises not only technological development but also individuals’ cultural rights. This is because open-source software underpins the freedom to take part in cultural life and to enjoy the benefits of scientific progress under the ICESCR. Access to open-source software also helps to identify biases and flaws in the use of AI by big tech firms, exposing and preventing future rights violations. Given the link between open-source software and cultural rights, the Act ought to go further in protecting the rights of open-source AI developers.

Given that, at the time of writing, the scheduled vote in the European Parliament is only weeks away, it remains to be seen which amendments are made before Members cast their votes. In any case, recent news that the arrival of AI chatbot ChatGPT forced EU lawmakers to reconsider aspects of the legislation suggests that the AI Act remains far from the finished article.