Get Ready for the EU AI Act

Jelena Đukanović

Senior Associate

30/05/2024

Artificial intelligence has long been a hot topic that attracts public attention, both for its potential and for the questions its use raises about citizens’ rights and protections. With that in mind, it was only a matter of time before comprehensive legal regulation of this area would be adopted, and the wait has finally come to an end: the European Union Artificial Intelligence Act has been adopted!

After long negotiations, in December 2023, an agreement on the draft AI Act was reached between the European Parliament and the Council of the European Union, and its final version was adopted on May 21, 2024, by the Council of the European Union.

Preparations for this European Union project are well underway: in January this year, the European Commission established the AI Office, and an official website was created for the EU Artificial Intelligence Act (hereinafter: AI Act), where you can learn more about this “revolutionary” regulation.

The text of the AI Act is expected to be published in the coming days in the Official Journal of the EU and will come into force twenty days after publication. The first set of legal obligations is expected to take effect by the end of 2024. With this in mind, organizations that use or plan to use artificial intelligence systems should familiarize themselves with the innovations and obligations introduced by the AI Act and assess how well their AI systems comply with the new rules, so they are ready for its implementation.

What are Artificial Intelligence Systems?

The AI Act establishes rules for the use of AI systems, introduces prohibitions or specific requirements related to certain types of these systems, as well as measures to support innovation with a particular focus on small and medium-sized enterprises.

Accordingly, it is first necessary to determine what AI systems are in order to understand the scope of the Act’s application. Instead of burdening you with the long legal definition of “AI systems,” we list below some examples of artificial intelligence systems:

  • Recommendation Systems: Such as those used by streaming services like Netflix or Spotify, which suggest movies, TV shows, or music based on user preferences and past behavior.
  • Autonomous Vehicles: Self-driving cars that interpret data from sensors to make decisions about navigation, speed, and obstacle avoidance.
  • Chatbots and Virtual Assistants: AI-driven conversational agents like Siri, Alexa, or customer service chatbots that provide responses and recommendations based on user input.
  • Content Generation Tools: AI-driven systems that generate text, images, or videos, such as GPT-3 for text generation or tools that create artwork or music.
  • Smart Home Devices: Devices like smart thermostats or security cameras that learn user behavior and make decisions to optimize home settings or alert owners to unusual activities.
  • Biometric Systems: Systems that use personal characteristics (directly linked to who you are) for authentication or identification of a person.
  • AI Systems Used for Recruitment or Candidate Selection: Used for analyzing and filtering applications.

We emphasize that these are just some of the many types of AI systems, and new applications of artificial intelligence are constantly being developed.

Who Does the AI Act Apply To?

The AI Act applies to a broad range of entities, such as:

  • AI Providers: Entities that develop AI systems independently or with the help of others;
  • AI Deployers: Entities that use AI systems (excluding personal use);
  • Importers and distributors of AI systems;
  • Manufacturers of AI systems;
  • Persons located in the EU who are affected by AI systems.

Therefore, if an individual or company falls into one of the categories mentioned above, they are subject to the AI Act and the obligations it introduces.

Additionally, similar to the GDPR, the AI Act has both territorial and extraterritorial application. In other words, although it is an act of the European Union, its provisions will, under certain circumstances, apply to individuals and companies from non-EU countries.

Specifically, besides AI providers and users located in the European Union, the AI Act also applies to:

  • Providers in a third country that place AI systems on the market or put them into use in the European Union;
  • Providers and users of AI systems established or located in a third country, where the output produced by the AI system is used in the European Union.

Example: An IT company based in Serbia (or any other country outside the EU) develops an AI-based software solution for HR recruitment. The product will be available and offered to companies and users in the EU. That IT company from Serbia will be obliged to comply with the AI Act, and the level of obligations will depend on the risk level of the AI system (as explained below).

Accordingly, with certain exceptions (e.g., systems used exclusively for military, defense, and research purposes), the AI Act applies to:

  • Both the private and public sectors, and
  • Entities within and outside the EU,

and establishes a range of obligations for all actors involved in the development, application, and use of artificial intelligence, from providers through importers and distributors to users of AI systems.

AI Act’s Approach to Artificial Intelligence

The AI Act adopts a risk-based approach to the use of artificial intelligence. This practically means that the obligations and legal measures are aligned with the level of risk that a particular AI system poses to the rights and freedoms of individuals and society in general.

The AI Act’s approach is based on identifying four risk levels, with obligations tailored to each: (1) unacceptable risk, (2) high risk, (3) limited risk (subject to transparency obligations), and (4) minimal risk (not subject to specific regulation).

(1) For artificial intelligence systems deemed to have an unacceptable risk, the Act mandates their prohibition. Examples of unacceptable AI systems cited by the European Parliament[1] include:

  • Voice-activated toys that encourage dangerous behavior in children,
  • Systems for classifying people based on behavior, socio-economic status, or personal characteristics,
  • Real-time and remote biometric identification systems, such as facial recognition (allowed only under certain exceptions).

(2) For high-risk systems, the AI Act introduces conditions that these systems must meet, such as risk management, data quality and security, transparency, documentation of performance, maintenance and updates, human oversight, and accuracy. Organizations providing or implementing these systems will face obligations like registration, quality management, system testing, monitoring, record-keeping, and incident reporting. Examples of high-risk systems include those used in education, employment, essential private and public services, law enforcement, border control management, and justice administration.

(3) As the third category of regulated AI systems, the AI Act recognizes limited-risk systems for which it imposes obligations in terms of ensuring transparency. This category includes, for example, customer support chatbots that provide automated responses to user inquiries.

(4) Finally, other AI systems that the AI Act does not specifically regulate are considered minimal-risk, and the Act does not impose special obligations for their development, testing, or use. This category includes, for example, email filters that separate spam from legitimate messages. However, EU authorities encourage voluntary compliance of non-high-risk AI systems with the regulations and measures prescribed by the AI Act.

Conclusion: The higher the risk, the greater the obligations and consequences for non-compliance.
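To make the tiering concrete, here is a minimal Python sketch that maps hypothetical example systems to the four risk tiers, with a one-line summary of what each tier implies. The system names and obligation summaries are heavily simplified illustrations, not the Act's own classifications:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the AI Act (simplified)."""
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # extensive compliance obligations
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no specific obligations

# Hypothetical examples, loosely following the categories discussed above.
EXAMPLE_CLASSIFICATION = {
    "social_scoring_system": RiskTier.UNACCEPTABLE,
    "cv_screening_tool": RiskTier.HIGH,           # recruitment use case
    "customer_support_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Return a one-line summary of what each tier implies."""
    return {
        RiskTier.UNACCEPTABLE: "Prohibited: must not be placed on the EU market.",
        RiskTier.HIGH: "Risk management, data quality, documentation, human oversight, registration.",
        RiskTier.LIMITED: "Transparency: users must know they are interacting with AI.",
        RiskTier.MINIMAL: "No specific obligations; voluntary codes encouraged.",
    }[tier]

for system, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{system}: {tier.value} -> {obligations(tier)}")
```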

Additionally, the AI Act specifically regulates general-purpose artificial intelligence models (GPAI), i.e., AI models capable of competently performing a wide range of different tasks, regardless of how the model is marketed, which can be integrated into various systems or applications (such as the widely used ChatGPT). For this type of AI model, the Act introduces specific obligations for providers and users, including:

  • Creating and providing detailed technical documentation of the model to the supervisory authority upon request,
  • Providing information to users to understand the capabilities, limitations, and construction of the model and making a sufficiently detailed summary of the content used for training the model publicly available,
  • Maintaining adequate cybersecurity protection, conducting model evaluations,
  • Appointing a representative in the EU for providers from third countries who place their AI systems on the EU market.

For more information on the AI systems classification according to their risks, obligations, and measures these systems will be subject to, read our blog Exploring the EU AI Act.

What is the Timeline for Compliance with the AI Act?

The AI Act will come into force 20 days after its publication in the Official Journal of the European Union, and its publication is expected in June this year.

Full implementation of the AI Act begins 24 months after the Act comes into force, with the exception of certain provisions. The planned timeline for the implementation of the AI Act is as follows:

  • Prohibition of AI systems deemed to be of unacceptable risk: 6 months after entry into force (expected end of 2024/early 2025);
  • Provisions related to general-purpose AI models, the formation of the European Artificial Intelligence Board and the Scientific Panel of Independent Experts, the designation of competent and notifying authorities, and provisions related to confidentiality and penalties: 12 months after entry into force (expected mid-2025);
  • The remainder of the AI Act, except the provisions that apply 36 months after entry into force: 24 months after entry into force (expected mid-2026);
  • Implementation of conditions for high-risk AI systems: 36 months after entry into force (expected mid-2027);
  • Certain systems placed on the market or put into use before the expiration of 12 months after the start of application that are components of large-scale IT systems established by EU law in the areas of freedom, security, and justice: until the end of 2030.
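Since each milestone is defined relative to the entry-into-force date, the concrete deadlines are simple date arithmetic. Below is a minimal Python sketch; the entry-into-force date is a placeholder assumption until the actual publication date is known:

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

# Placeholder assumption: replace with the real entry-into-force date
# (20 days after publication in the Official Journal).
entry_into_force = date(2024, 7, 1)

milestones = {
    "Prohibitions on unacceptable-risk AI systems": 6,
    "GPAI provisions, governance bodies, penalties": 12,
    "Remainder of the AI Act": 24,
    "Conditions for certain high-risk AI systems": 36,
}

for obligation, months in milestones.items():
    deadline = entry_into_force + relativedelta(months=months)
    print(f"{deadline.isoformat()}: {obligation}")
```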

How to Prepare for the Implementation of the AI Act?

Given the complexity of the AI Act and the numerous new obligations it introduces, a proactive approach by entities and organizations regarding compliance is recommended, as well as timely implementation of appropriate measures to be ready for the application of the new regulation.

To ensure compliance, we recommend taking the following steps:

1) Mapping AI Systems

To determine whether you will be subject to the AI Act, you need to conduct a detailed mapping of all AI systems you are currently developing, marketing, or using, or planning to use in the future.

During the mapping, you should determine who these systems are intended for, what their capabilities are, and what role your organization plays in relation to these AI systems (whether you develop them, use them, plan to market them in the European Union, etc.).

Additionally, to determine if the AI Act will apply to your organization, you need to understand which risk category these systems will fall into.
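A practical way to start such a mapping is to keep a structured inventory record per system. Here is a minimal Python sketch; the fields and the example entry are hypothetical illustrations, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of a hypothetical AI system inventory."""
    name: str
    purpose: str                  # what the system does
    role: str                     # provider, deployer, importer, distributor
    offered_in_eu: bool           # placed on or used in the EU market?
    intended_users: list = field(default_factory=list)
    risk_tier: str = "unclassified"   # to be assessed against the Act

inventory = [
    AISystemRecord(
        name="cv_screening_tool",
        purpose="Filters job applications for HR",
        role="provider",
        offered_in_eu=True,
        intended_users=["EU-based HR departments"],
    ),
]

for record in inventory:
    print(record)
```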

2) Determining Obligations Under the AI Act

After determining that the AI Act will apply to all or some of your organization’s AI systems, you need to familiarize yourself with the Act’s obligations and establish which obligations pertain to you to proceed to the next steps. As previously explained, the extent of the obligations will depend on your role and the estimated risk of the AI system.

3) Compliance Assessment

The next step in preparing for the upcoming AI Act is to conduct a detailed analysis of the areas where your current systems, processes, and controls do not meet the AI Act’s requirements and identify the steps needed to eliminate any non-compliance.

This assessment should include:

  • Identifying the differences between your current operations and the new legal requirements;
  • Identifying gaps and deficiencies in your organization, or business areas and parts of the system that need to be supplemented, improved, or changed;
  • Creating a compliance plan that will include a risk assessment and proposed measures to mitigate those risks, including adopting a list of documentation to be created, as well as all strategic changes to be implemented with deadlines (e.g., adapting AI systems to be suitable for use, withdrawing or redesigning certain tools, etc.);
  • Determining the scope of resources that may need to be allocated to implement the compliance plan;
  • Developing a legal framework for AI governance that suits your business environment.

4) Implementing Security Measures

After identifying deficiencies and creating a compliance plan, the next step is to implement the plan and introduce basic measures to mitigate the impact of all identified risks.

For systems that fall into the high-risk category, these measures may include:

  • Ensuring human oversight mechanisms,
  • Establishing transparency requirements to inform users about how decisions are made by AI systems,
  • Establishing risk management procedures, including testing and documenting data quality,
  • Implementing security measures to protect the system from potential attacks and establishing procedures in case of system security breaches,
  • Establishing training programs and raising awareness to familiarize employees with the requirements and implications of the AI Act,
  • Investing in expertise and gaining knowledge in the field of AI compliance regulations.

These measures not only help meet regulatory requirements but also serve to preserve the security of your systems and business, promote a good reputation, and build trust among users and stakeholders.

5) Adopting Appropriate AI Policies

Now is the right time to establish comprehensive guidelines and rules relevant to the integration of AI into various functions within your organization, regardless of the industry and your role in applying or marketing AI systems.

Internal consultations with sectors such as IT, security, legal, and human resources are crucial for developing internal policies and regulations. It may also be necessary to adjust existing policies to comply with the new AI regulation requirements.

When drafting AI policies, consider integrating people, processes, and technology, addressing aspects such as:

  • Role-based access systems for AI systems or their use within your organization (see the sketch after this list).
  • Regulation of ownership issues over input data and intellectual property rights.
  • Regulation of confidentiality and measures to protect confidentiality when using or developing AI systems.
  • Regulation of data protection issues if personal data is processed through AI systems.
  • Defining procedures to ensure the accuracy, quality, and legality of AI systems.
  • Defining responsibilities and consequences for non-compliance with policies and procedures.
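For illustration, rules such as role-based access can be captured in machine-readable form so they can be reviewed, versioned, and enforced. Below is a minimal Python sketch with hypothetical system names, roles, and policy flags; real policies will of course be far richer:

```python
# Hypothetical role-based access policy for internal AI tools,
# expressed as plain data so it can be reviewed and versioned.
AI_ACCESS_POLICY = {
    "customer_support_chatbot": {
        "allowed_roles": ["support_agent", "support_lead"],
        "personal_data_allowed": False,    # data protection rule from policy
        "confidential_inputs_allowed": False,
    },
    "cv_screening_tool": {
        "allowed_roles": ["hr_manager"],
        "personal_data_allowed": True,     # requires a GDPR-compliant basis
        "confidential_inputs_allowed": True,
    },
}

def may_use(role: str, system: str) -> bool:
    """Check whether a given role may use a given AI system."""
    policy = AI_ACCESS_POLICY.get(system)
    return policy is not None and role in policy["allowed_roles"]

assert may_use("hr_manager", "cv_screening_tool")
assert not may_use("support_agent", "cv_screening_tool")
```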

6) Monitoring Regulatory Developments and Practices in AI

To properly prepare for the implementation of the AI Act, it is not enough to be familiar with the provisions of the AI Act itself. It is also necessary to stay informed about the latest developments regarding the adoption of regulations and accompanying acts, guidelines, and instructions issued by supervisory authorities and expert bodies on this matter, including their amendments and updates.

By staying updated with news and developments in AI regulations and practices, you can proactively adjust your strategy and ensure compliance with evolving regulations, gaining a competitive advantage over other market participants.

If you want to stay updated with all the novelties in the legal world, including developments regarding AI regulation, you can subscribe to the Zunic Law Firm Newsletter.

What are the Consequences of Violating the AI Act?

The consequences of violating or failing to comply with the AI Act can surpass even the fines imposed for GDPR violations.

Fines for non-compliance with the AI Act can vary depending on the type of infringement. The strictest penalties are reserved for non-compliance with the prohibition on the use of certain AI systems, amounting to up to EUR 35 million or up to 7% of the total worldwide annual turnover, whichever is higher.

For non-compliance with most other provisions, the Act prescribes fines of up to EUR 15 million or up to 3% of the total worldwide annual turnover, whichever is higher.
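The “whichever is higher” rule is straightforward arithmetic: the applicable ceiling is the greater of the fixed amount and the percentage of worldwide annual turnover. A quick Python illustration using the figures above:

```python
def max_fine(annual_turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Return the maximum fine: the higher of the fixed cap and pct of turnover."""
    return max(fixed_cap_eur, annual_turnover_eur * pct)

# Prohibited-practice infringement: up to EUR 35M or 7% of worldwide turnover.
print(max_fine(1_000_000_000, 35_000_000, 0.07))  # 70,000,000 -> 7% applies
# Most other infringements: up to EUR 15M or 3% of worldwide turnover.
print(max_fine(100_000_000, 15_000_000, 0.03))    # 15,000,000 -> fixed cap applies
```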

Since member states have already begun establishing supervisory bodies (with Spain taking the first step) and the EU AI Office has been established, it is evident that the relevant authorities are already preparing for the implementation and enforcement of the AI Act.

Therefore, it is essential to prepare and ensure in time that your business and practices regarding the development, testing, application, and use of artificial intelligence comply with the AI Act. Otherwise, once the AI Act comes into force, you may face all the consequences of non-compliance with the regulations, including substantial fines and damage to your organization’s reputation.

[1] European Parliament, “EU AI Act: first regulation on artificial intelligence”, https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
