In a landmark vote in June 2023, the European Parliament adopted its negotiating position on the European Commission’s proposal for the EU Artificial Intelligence Act, marking a significant stride in the regulation of artificial intelligence (AI) within the EU. This pioneering move aligns with broader European efforts to safeguard personal data, as exemplified by the General Data Protection Regulation (GDPR). Acknowledging the manifold benefits AI brings to everyday life, EU representatives recognized the imperative of establishing binding regulations to temper potential risks. These risks include privacy and security concerns, the opacity of AI systems, and bias and discrimination, all of which threaten fundamental rights safeguarded by the EU Charter of Fundamental Rights and the European Convention on Human Rights.
As negotiations on the final version of the EU AI Act continue, it’s worth taking stock of AI regulation both within and outside the EU: the adoption process of the EU AI Act, its scope, and the obligations it imposes on providers and users of AI systems. By reading AI governance together with the EU’s commitment to data protection, we can see how these regulations seek to balance AI innovation with the preservation of individual rights.
A World in Flux and the Journey to AI Regulation
Even outside the EU, countries are becoming more aware of the importance of regulating AI. Until recently, the United States of America (USA) had no binding rules on the matter. The only guidance was the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, a voluntary framework that providers of AI systems can use to develop these systems responsibly and make their products as trustworthy as possible. On October 30, 2023, the US President issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence which, among other things, requires AI developers to share their safety test results and other critical information with the US government. Meanwhile, China’s Cyberspace Administration is considering AI regulation proposals, and the UK is crafting a principles-based framework intended to encourage innovation.
On the international front, the Organisation for Economic Co-operation and Development (OECD) introduced a non-binding Recommendation on AI in 2019. UNESCO adopted its Recommendation on the Ethics of Artificial Intelligence in 2021, while the Council of Europe is currently developing an international convention on artificial intelligence.
On November 1, 2023, world leaders gathered at Bletchley Park in the UK to address the pressing issue of AI. They signed the Bletchley Declaration on AI Safety, which outlines the “urgent need to understand and collectively manage potential risks through a new joint global effort.” Follow-up summits, roughly six months apart, have already been scheduled in South Korea and France.
What Falls Within the EU AI Act's Reach?
In recent years, determining the rules governing artificial intelligence has been one of the hot topics among representatives of the European Union. Although the first documents addressing the problem, such as the White Paper on Artificial Intelligence, the Ethics Guidelines for Trustworthy AI, and the Policy and Investment Recommendations, were non-binding, policy-makers took the position that, with AI systems developing at an incredible speed, binding rules are now necessary as well. The European Commission’s leading reasons for proposing the EU AI Act are to set the conditions for using AI systems on the EU market, to avoid fragmentation of that market across the EU, to encourage the development of AI systems while minimizing risks to citizens’ rights, and to ensure that these rules can be enforced.
The act aims to set risk-based conditions for AI systems. In practice, this means AI systems will be sorted into several categories according to how much risk they pose to society. Systems with a high level of risk will face stricter conditions and controls, and those posing unacceptable risks may be prohibited altogether, while less risky systems will only need to meet lighter transparency requirements.
The AI Act will also attempt to define AI systems legally, even though the scientific community has yet to agree on any proposed definition of AI systems. Can lawyers do a better job than scientists? We are quite sceptical.
Who will be subject to the rules of the EU AI Act?
The envisaged regulations primarily focus on AI system providers within the EU and extend their reach to those in third countries introducing AI systems to the EU market or utilizing them within EU borders. In essence, these regulations exhibit both territorial and extraterritorial applicability, mirroring the approach established by the GDPR.
To prevent attempts to bypass these rules, the new regulations would also cover providers and users of AI systems located in third countries if the systems’ outputs are used within the EU. However, the draft regulation does not extend to AI systems designed or used exclusively for military purposes, nor to public authorities in third countries or international organizations when they use AI systems within the scope of international agreements on law enforcement and judicial cooperation.
Risk Assessment in Focus: Which Category Does Your Business Belong To?
The draft AI Act adopts a risk-based strategy in which legal measures are aligned with the degree of risk involved. To achieve this, the draft distinguishes between the following categories:
1) Unacceptable risk
If the AI system has an unacceptable risk level, meaning that it poses an obvious threat to people’s rights and safety, it will be prohibited on the EU market. These AI systems include:
- AI systems that deploy harmful manipulative ‘subliminal techniques’;
- AI systems that exploit the vulnerabilities of specific groups (e.g. persons with a physical or mental disability);
- AI systems used by public authorities, or on their behalf, for social scoring purposes;
- ‘Real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes, with limited exceptions such as “post” remote biometric identification systems, where identification occurs after a significant delay in order to prosecute serious crimes, and only with court approval.
2) High risk
AI systems that may affect people’s rights, but with lower threat levels than the previous group, are categorized as high-risk systems. For example, systems used in products falling under the EU’s product safety legislation (e.g. toys, aviation, cars, medical devices, lifts) are considered high-risk. Beyond that, this category includes a list of eight specific areas, which the European Commission can update if it deems it necessary. The current list reads:
- Biometric identification and categorization of natural persons;
- Management and operation of critical infrastructure;
- Education and vocational training;
- Employment, worker management, and access to self-employment;
- Access to and enjoyment of essential private services and public services and benefits;
- Law enforcement;
- Migration, asylum, and border control management;
- Administration of justice and democratic processes.
High-risk AI systems will potentially be subject to the following measures:
- Mandatory registration of the AI system in an EU database managed by the European Commission prior to placement on the market, for providers governed by EU rules; or self-assessment of the system’s compliance with the EU AI Act, for providers not governed by EU rules.
- Compliance with several requirements related to matters such as cybersecurity, data protection, and risk management.
- Non-EU providers will need to appoint an authorized representative in the EU to ensure the system’s compliance with the AI Act.
3) Limited risk
AI systems with limited risk should adhere to basic transparency standards that enable users to make informed decisions. It’s essential to notify users when they are engaging with AI, including AI systems that generate or manipulate image, audio, or video content, such as deepfakes. After interacting with such an application, users can then choose whether to continue using it.
4) Low or minimal risk
Other AI systems with low or minimal risk may be developed and used in the EU without additional legal obligations. Nonetheless, the proposed AI Act contemplates establishing codes of conduct to encourage providers of non-high-risk AI systems to voluntarily apply the mandatory requirements for high-risk AI systems.[1]
Stalemate Over Foundation Models: France, Germany, and Italy Challenge the EU’s Draft AI Legislation
France, Germany, and Italy have become key players in a deadlock over a crucial aspect of the EU’s proposed AI legislation. At the heart of the dispute is the regulation of “foundation models”: sophisticated AI systems known for their ability to generate diverse outputs and to serve either as standalone applications or as building blocks for other ones. The trio contends that imposing strict regulations on these models could impede Europe’s progress in AI technology. Instead, they advocate a regulatory framework that fosters innovation and competition, allowing European companies like Mistral in France and Aleph Alpha in Germany to shine on the global AI stage. Their proposed strategy involves self-regulation through company commitments and codes of conduct. Critics, however, argue that such an approach would leave the EU with unenforceable rules for the most powerful and potentially harmful AI systems, raising concerns about the overall efficacy of regulation. The stark differences in perspective have created a deadlock that, if unresolved, could jeopardize the entire negotiation process for the Artificial Intelligence Act.
Governance, enforcement, and sanctions
Although negotiations are still ongoing over whether the rules prescribed by the AI Act will be mandatory and over how its provisions will be implemented, the draft proposes the following measures:
- Member States would be required to appoint competent authorities, including a national supervisory authority, to oversee the enforcement of the regulation;
- The European Artificial Intelligence Board would be established at the EU level;
- National market surveillance authorities would assess operators’ compliance with the obligations for high-risk AI systems, with access to confidential information. They would impose corrective measures (prohibiting, restricting, withdrawing, or recalling AI systems) in cases of non-compliance, and where issues persist, Member States would have to intervene;
- Administrative fines of up to €30 million or 6% of total worldwide annual turnover, whichever is higher, are planned for violations of the AI Act, with Member States responsible for enforcement.
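To make that ceiling concrete, the sketch below computes the maximum possible fine under the draft’s “whichever is higher” logic; the function name and the turnover figures are hypothetical, for illustration only.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of an administrative fine under the draft EU AI Act:
    EUR 30 million or 6% of total worldwide annual turnover, whichever is higher."""
    FLAT_CAP_EUR = 30_000_000   # EUR 30 million
    TURNOVER_SHARE = 0.06       # 6% of total worldwide annual turnover
    return max(FLAT_CAP_EUR, TURNOVER_SHARE * worldwide_annual_turnover_eur)

# Hypothetical turnover figures:
print(max_fine_eur(100_000_000))    # 30000000 -> the flat EUR 30 million cap applies
print(max_fine_eur(1_000_000_000))  # 60000000.0 -> 6% of turnover exceeds the cap
```

In other words, the €30 million figure acts as a floor on the maximum: for large companies, the 6% turnover-based ceiling quickly becomes the binding one.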
The upcoming discussions, scheduled for December 6, 2023, will focus on resolving remaining issues crucial for EU representatives to agree upon. These include:
- exemptions for law enforcement, especially regarding facial recognition technologies employed by national authorities,
- the regulation of foundation models, and
- addressing governance and enforcement challenges.[2]
Despite the existing disagreements, it remains imperative to enact a binding law regulating artificial intelligence, including foundation models. This is essential to safeguard citizens’ fundamental rights while ensuring that the legislation does not impede the advancement of technologies that hold significant potential benefits for humanity, in the EU and beyond.