EU Artificial Intelligence Act

With the introduction of the Artificial Intelligence Act, the European Union aims to create a legal framework for AI to promote trust and excellence. The AI Act would establish a risk-based framework to regulate AI applications, products and services. The rule of thumb: the higher the risk, the stricter the rule. But the proposal also raises important questions about fundamental rights and whether to simply prohibit certain AI applications, such as social scoring and mass surveillance, as UNESCO has recently urged in the Recommendation on AI Ethics, endorsed by 193 countries. Because of the significance of the proposed EU Act and the CAIDP’s goal to protect fundamental rights, democratic institutions and the rule of law, we have created this informational page to provide easy access to EU institutional documents, the relevant work of CAIDP and others, and to chart the important milestones as the proposal moves forward.

The Center for AI and Digital Policy (CAIDP) has a good page on the EU Artificial Intelligence Act with links to different resources. I’m trying to understand this Act and the network of documents related to it, as the AI Act could have a profound impact on how AI is regulated, so I’ve put together some starting points.

First, the point about the potential influence of the AI Act is made in a slide by Giuliano Borter, a CAIDP Fellow. The slide deck is a great starting point that covers the key points to know.

Key Point #1 – EU Shapes Global Digital Policy

• Unlike OECD AI Principles, EU AI legislation will have legal force with consequences for businesses and consumers

• EU has enormous influence on global digital policy (e.g. GDPR)

• EU AI regulation could have similar impact

Borter goes on to point out that the Proposal is based on a “risk-based approach”: the higher the risk, the stricter the regulation. This approach is supposed to leave legal room for innovative businesses not working on risky projects, while controlling problematic (riskier) uses. Borter’s slides suggest that an unresolved issue is mass surveillance. I can imagine a danger that data collected or inferred by smaller (or lower-risk) services gets aggregated into something with a different level of risk. There are also issues around biometrics (from face recognition on) and AI weapons that might not be covered.
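To make the tiered logic concrete, here is a small illustrative sketch. The four tier names follow the Proposal’s risk categories, but the “treatment” descriptions are my own shorthand, not the Regulation’s legal language:

```python
# Illustrative sketch of the Proposal's risk-based approach.
# The four tiers come from the Proposal; the treatment strings
# are my own shorthand, not legal text.
RISK_TIERS = {
    "unacceptable": "prohibited (e.g. social scoring by public authorities)",
    "high": "permitted subject to conformity assessment and ongoing obligations",
    "limited": "permitted with transparency duties (e.g. disclosing that a chatbot is a bot)",
    "minimal": "permitted with no new obligations",
}

def treatment(tier: str) -> str:
    """Look up the (shorthand) regulatory treatment for a risk tier."""
    return RISK_TIERS[tier]

print(treatment("high"))
```

The point of the structure is the one Borter makes: most AI systems would fall in the bottom tiers and face little or no new regulation, while the burden concentrates on the high-risk tier.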

At the moment the Act is only a proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). The Proposal was launched in April 2021, and all sorts of entities, including the CAIDP, are suggesting amendments.

What was the reason for the AI Act? In the Reasons and Objectives section that opens the Proposal, they write that “The proposal is based on EU values and fundamental rights and aims to give people and other users the confidence to embrace AI-based solutions, while encouraging businesses to develop them.” (p. 1) You can see the balancing of values, trust and business.

But I think it is really the economic/business side of the issue that is driving the Act. This can be seen in the Explanatory Statement at the end of the Report on artificial intelligence in a digital age (PDF) from the European Parliament Special Committee on Artificial Intelligence in a Digital Age (AIDA).

Within the global competition, the EU has already fallen behind. Significant parts of AI innovation and even more the commercialisation of AI technologies take place outside of Europe. We neither take the lead in development, research or investment in AI. If we do not set clear standards for the human-centred approach to AI that is based on our core European ethical standards and democratic values, they will be determined elsewhere. The consequences of falling further behind do not only threaten our economic prosperity but also lead to an application of AI that threatens our security, including surveillance, disinformation and social scoring. In fact, to be a global power means to be a leader in AI. (p. 61)

The AI Act may be seen as a way to catch up. AIDA makes the supporting case that “Instead of focusing on threats, a human-centric approach to AI based on our values will use AI for its benefits and give us the competitive edge to frame AI regulation on the global stage.” (p. 61) The idea seems to be that a values-based proposal that enables regulated, responsible AI will not only avoid risky uses but also create the legal space to encourage low-risk innovation. In particular, I sense a linkage to the Green Deal – i.e. that AI is seen as a promising technology that could help reduce energy use through smart systems.

Access Now also has a page on the AI Act. They have a nice, clear set of proposed amendments that shows where some of the weaknesses in the AI Act could be.