Sarah Chander, Senior Policy Advisor for European Digital Rights, was the second expert we invited for our online Conversations on Digital Rights.
The European Commission’s legislative proposal on artificial intelligence (AI), released in April 2021, has two competing aims. The first is that AI should be promoted, so that Europe can become a leader in this field and the European market can compete on a global scale. The second is that the regulation should ensure AI is something Europeans can trust to respect their fundamental rights.
European Digital Rights (EDRI) is interested in this regulation because AI has the potential to impact not only our privacy but also other human rights, such as freedom from discrimination when encountering AI systems, and freedom of expression.
What was striking about the AI regulation is that it contained a list of prohibitions on certain uses of AI, defined by the Commission as “unacceptable uses”. This is a huge win: it was not foreseen in the original plans, even though EDRI and 61 other organisations had called for red lines on the use of AI to be included in the proposal.
“The very fact that this regulation includes some prohibitions on the uses of AI is a huge positive step”
The uses we outlined were, for instance, those that would facilitate mass surveillance, but also those aimed at determining access to the most essential public goods, like education or social security services. We also called for a red line on predictive policing: systems that use historical data to determine where, and by whom, crime will be committed in the future. These uses were not all covered by the Commission’s regulation, but the very fact that this regulation includes some prohibitions on the uses of AI is a huge positive step.
Generally, however, some aspects of the text could be better. Civil society, including antiracist, LGBTI+ and fair trial organisations, has pointed out that some uses of AI are harmful. These harmful uses include predictive policing, the use of algorithms to determine rates of recidivism, and uses of AI in the context of migration control at borders.
There is a big question mark as to whether the EU’s rules can help us with those uses. The EU has set out a categorisation of what constitutes a “high risk” AI system, including the use of AI in certain criminal justice contexts, AI in the educational context, as well as various systems used to surveil workers. Developers of such systems must therefore go through a series of checks before their systems go onto the market. These checks include rules on transparency, ensuring sufficient and accurate data quality, and ensuring that the system is robust, among other requirements.
But in doing so, the EU is placing a lot of responsibility with the companies themselves. This is problematic for several reasons, one of them being that companies are entrusted to self-assess their systems’ compliance with the law. Beyond that, the EU’s standards are also quite vague: Article 10, for example, which sets out data quality requirements for providers, uses language such as “ensuring reasonable assumptions and accuracy”. Both aspects mean that there is a huge gap in terms of protection. There is a question mark, particularly amongst civil society, as to whether these rules are sturdy enough to safeguard fundamental rights.
“There is a deeper story as to why there are quite soft rules when it comes to the use of AI in law enforcement and migration control”
At present, we have a framework that governs which AI systems go to market, but not how AI systems are put into use. Many within civil society have been arguing that checks and balances are needed at the point of use: if, for instance, a police force wants to use a certain system, it should have to meet several obligations and go through assessments before it starts using that system.
But in fact, there is a deeper story as to why there are quite soft rules when it comes to law enforcement and migration control. As we have seen in other areas, such as the EU’s proposed Europol update, the EU suggests there are many benefits to using AI in these fields. We come to a trade-off: the EU wants to promote these uses, but there are severe fundamental rights risks, including the risk of discrimination against racialised communities and that of undermining the dignity of migrants travelling to Europe. How do we strike a balance here?
Now that the AI regulation is with the European Parliament, we at EDRI and beyond are pushing for a more rounded conversation about the fact that there are very few checks, and that most of them rely on self-assessment.
The outcome will depend on the extent to which the Parliament considers a wide range of uses and perspectives. At a recent summit on AI organised by Politico, a former Google boss argued that the EU was too strict on AI systems and that its fundamental rights safeguards were not conducive to innovation. Going forward, the Parliament should think about how it speaks to a wide range of people about AI, including civil society, digital rights organisations and organisations working on disability issues, among others. Our parliamentarians need to make sure that their decisions on the regulation reflect the human rights of all those groups, and not just the claims of big business that we need more innovation in AI.
. . .
Photo by Andreea Belu.
. . .
The Coppieters Foundation is financially supported by the European Parliament. The European Parliament is not liable for the content of the conferences, events or opinion pieces published on our website.