
AI Act Timeline – The Countdown Starts!

Aleksandra Jaćimović

Senior Associate

22/07/2024

As temperatures rise this summer, so does anticipation in the legal realm, with the EU AI Act set to make its grand entrance as the new rock star, entering into force on August 1, 2024.

Artificial Intelligence is poised to revolutionize numerous sectors globally, bringing transformative benefits and unprecedented challenges. The European Union (EU), recognizing the pivotal role of AI, has established a comprehensive regulatory framework to address these challenges and harness the potential of AI responsibly.

The EU AI Act, published in the Official Journal of the EU on July 12, 2024, marks a significant milestone in this endeavor.

This blog delves into the key provisions, timelines, and implications of the EU AI Act, exploring how it sets a benchmark for AI regulation worldwide.

 

Why is the AI Act Important for Companies?

 

The introduction of the AI Act marks a significant shift in how companies must approach the development, deployment, and use of artificial intelligence.

Companies must brace for increased accountability at every stage of the AI lifecycle.

Violations of the AI regulations can have far-reaching consequences, potentially affecting managers and CEOs personally. This heightened accountability means that developers must program with the expectation of regulatory scrutiny, ensuring that their AI systems adhere to the stringent standards set forth by the Act.

By doing so, companies can mitigate risks, foster trust, and harness the transformative potential of AI responsibly.

 

What are the Penalties for Violating the AI Act?

 

As described in our previous blog, the penalties for breaching or failing to adhere to the AI Act are poised to eclipse those enforced for GDPR violations.

The AI Act introduces stringent penalties for non-compliance, emphasizing the serious nature of adhering to its regulations.

Companies found in violation could face fines of up to EUR 35 million or 7% of their global annual turnover for the previous financial year, whichever amount is higher.

Furthermore, for non-compliance with most other provisions, the Act prescribes fines of up to EUR 15 million or up to 3% of the total worldwide annual turnover, whichever is higher.

These significant penalties underscore the importance of following the AI Act’s guidelines, encouraging businesses to prioritize ethical and responsible AI practices to avoid substantial financial repercussions.
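The two fine tiers boil down to a simple calculation: the applicable cap is whichever is higher of a fixed amount and a percentage of worldwide annual turnover. A minimal illustrative sketch follows; the function name and inputs are our own, and this is not legal advice:

```python
# Illustrative sketch of the two fine tiers described above.
# Function name and parameters are hypothetical, not taken from the Act.

def max_fine_eur(global_annual_turnover_eur: float, prohibited_practice: bool) -> float:
    """Upper bound of the fine: the higher of a fixed cap and a
    percentage of total worldwide annual turnover."""
    if prohibited_practice:
        # Violations of prohibited-AI provisions: up to EUR 35M or 7%.
        return max(35_000_000, global_annual_turnover_eur * 7 / 100)
    # Most other violations: up to EUR 15M or 3%.
    return max(15_000_000, global_annual_turnover_eur * 3 / 100)

# A company with EUR 1 billion in turnover faces up to EUR 70 million
# for a prohibited-practice violation, since 7% exceeds the EUR 35M floor.
print(max_fine_eur(1_000_000_000, prohibited_practice=True))  # → 70000000.0
```

Note how `max` captures the Act's "whichever amount is higher" rule in both tiers.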

 

Entry into Force and Application Timeline

 

The EU AI Act enters into force 20 days after its publication, on August 1, 2024.

However, its provisions will be applied in a staggered manner over the following years. This phased implementation ensures a smooth transition and allows stakeholders to prepare for compliance effectively.

The majority of the provisions will come into application on August 2, 2026, 24 months after the entry into force.

By that date, several critical obligations will come into effect. These obligations specifically target high-risk AI systems listed in Annex III, which include applications in biometrics, critical infrastructure, education, employment, access to essential public services, law enforcement, immigration, and the administration of justice.

By this date, member states are required to have implemented rules on penalties, including administrative fines, for non-compliance with these regulations. Additionally, member state authorities must have established at least one operational AI regulatory sandbox to facilitate the safe and innovative development of AI technologies. The European Commission will also review and possibly amend the list of high-risk AI systems to ensure it remains relevant and comprehensive.

Nonetheless, several critical provisions have different timelines, reflecting the urgency and complexity of specific aspects.

 

Exceptions to the General Timeline

 

1. Prohibitions on Unacceptable Risk AI:

The provisions regarding prohibitions on AI systems deemed to pose unacceptable risks to safety, fundamental rights, and public interests will come into application six months after the entry into force, on February 2, 2025. This early implementation underscores the EU’s commitment to mitigating the most significant risks associated with AI technologies promptly.

2. Obligations for Providers of General-Purpose AI Models:

Provisions concerning the obligations for providers of general-purpose AI models will enter into application 12 months after the entry into force, on August 2, 2025. By then, the member states should have appointed their competent authorities. Furthermore, the European Commission will conduct its annual review of the list of prohibited AI practices and propose legislative amendments where necessary.

This period allows providers sufficient time to adapt to the new requirements and for authorities to establish the necessary oversight mechanisms.

3. Post-Market Monitoring:

The European Commission will implement acts on post-market monitoring 18 months after the Act enters into force, on February 2, 2026.

This provision ensures ongoing vigilance and accountability for AI systems after they have been deployed, reinforcing the EU’s emphasis on continuous oversight.

4. The Remaining Provisions Related to High-Risk AI Systems:

Thirty-six months after the Act enters into force, on August 2, 2027, obligations will take effect for high-risk AI systems that are not listed in Annex III but are intended to be used as safety components of products. Additionally, obligations will take effect for high-risk AI systems in which the AI itself is a product requiring a third-party conformity assessment under existing sector-specific EU laws, such as those covering toys, radio equipment, in vitro diagnostic medical devices, civil aviation security, and agricultural vehicles.

5. Large-Scale Information Technology Systems:

By the end of 2030, obligations will apply to certain AI systems that are components of large-scale information technology systems established by EU law in areas such as freedom, security, and justice, including the Schengen Information System. This extended timeline reflects the complexity and critical nature of these systems, ensuring that the integration of AI is thoroughly vetted and secure.
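The staggered dates above can be collected into a simple lookup table. A minimal sketch, with milestone descriptions paraphrased from this post and a purely illustrative helper function:

```python
from datetime import date

# Key application dates of the EU AI Act, paraphrased from the timeline above.
AI_ACT_MILESTONES = {
    date(2024, 8, 1): "Entry into force",
    date(2025, 2, 2): "Prohibitions on unacceptable-risk AI apply",
    date(2025, 8, 2): "Obligations for general-purpose AI model providers apply",
    date(2026, 2, 2): "Post-market monitoring implementing acts due",
    date(2026, 8, 2): "Majority of provisions apply, incl. Annex III high-risk systems",
    date(2027, 8, 2): "Remaining high-risk obligations (safety components of products)",
    date(2030, 12, 31): "Obligations for AI in large-scale EU IT systems",
}

def provisions_in_effect(on: date) -> list[str]:
    """Return the milestones that have already taken effect on a given date."""
    return [desc for d, desc in sorted(AI_ACT_MILESTONES.items()) if d <= on]

# E.g., by September 1, 2025, the first three milestones have taken effect.
print(provisions_in_effect(date(2025, 9, 1)))
```

Sorting by date before filtering keeps the output in chronological order, mirroring the phased rollout described above.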

 

Delegated Acts by the European Commission

 

The EU AI Act grants the European Commission the authority to issue delegated acts on several critical areas, enhancing the flexibility and adaptability of the regulatory framework. Delegated acts are non-legislative acts that amend or supplement non-essential elements of the legislation. The Commission’s power to issue these acts lasts for an initial period ending on August 2, 2029, with the possibility of extension for another five years.

Key areas for delegated acts include:

  • Definition of AI systems
  • Criteria and use cases for high-risk AI
  • Thresholds for general-purpose AI models with systemic risk
  • Technical documentation requirements for general-purpose AI
  • Conformity assessments
  • EU declaration of conformity

 

These delegated acts allow the Commission to refine and update the regulatory framework in response to technological advancements and emerging risks, ensuring that the EU AI Act remains relevant and effective over time.

 

Codes of Practice and Guidance

 

The EU AI Act emphasizes the importance of practical and actionable guidelines to facilitate compliance and foster a shared understanding among stakeholders.

The AI Office is tasked with drawing up codes of practice covering obligations for providers of general-purpose AI models. These codes of practice should be ready by May 2, 2025, with at least a three-month lead time before they take effect, allowing stakeholders to prepare adequately.

Furthermore, the European Commission can issue guidance on various aspects to support the implementation of the Act:

  • High-risk AI incident reporting by August 2, 2025
  • Practical implementation of high-risk AI requirements, with a list of practical examples of high-risk and non-high-risk use cases by February 2, 2026
  • Prohibited AI practices “when deemed necessary”
  • Application of the definition of an AI system “when deemed necessary”
  • Requirements for high-risk AI systems “when deemed necessary”
  • Practical implementation of transparency obligations “when deemed necessary”
  • Relationship of the AI Act and its enforcement with other EU laws “when deemed necessary”

 

This comprehensive guidance ensures clarity and consistency in the application of the EU AI Act, addressing potential ambiguities and fostering a unified approach across member states.

 

Implications for AI Development and Deployment

 

The EU AI Act sets a precedent for AI regulation globally, with far-reaching implications for AI development and deployment. By establishing a robust regulatory framework, the EU aims to foster innovation while ensuring safety, transparency, and accountability. The Act’s focus on high-risk AI systems reflects a risk-based approach, targeting the most significant threats without stifling innovation in lower-risk applications.

1. Encouraging Responsible Innovation:

The Act encourages responsible innovation by setting clear standards and expectations for AI systems. This regulatory certainty can foster trust and confidence among consumers and businesses, promoting the adoption of AI technologies in various sectors.

2. Enhancing Safety and Accountability:

By mandating rigorous assessments and continuous monitoring of high-risk AI systems, the Act enhances safety and accountability. These measures help prevent harm and ensure that AI technologies are used ethically and responsibly.

3. Aligning with Global Standards:

The EU AI Act aligns with international efforts to regulate AI, contributing to the establishment of global standards. By leading the way in AI regulation, the EU can influence global policies and practices, promoting a harmonized approach to AI governance.

4. Supporting Market Competitiveness:

The Act supports market competitiveness by leveling the playing field and ensuring that all AI developers and providers adhere to the same standards. This can prevent unfair advantages and foster healthy competition, driving innovation and growth.

 

Conclusion

 

The EU AI Act represents a landmark in the regulation of artificial intelligence, setting a comprehensive and forward-looking framework for the development and deployment of AI technologies. Its phased implementation, combined with delegated acts, codes of practice, and guidance, ensures a balanced approach that promotes innovation while safeguarding public interests. As the world grapples with the transformative potential of AI, the EU AI Act serves as a keystone regulation, offering valuable lessons and insights for policymakers and stakeholders globally.

 

The First Step

 

The initial step in complying with the AI Act is for companies to determine which regulatory group they belong to.

This classification is crucial as it dictates the specific requirements and obligations a company must adhere to under the Act.

By accurately identifying their group, companies can tailor their compliance strategies to meet the relevant standards, ensuring they are well-prepared to navigate the complexities of the new regulatory landscape.
