The EU Artificial Intelligence Act (AI Act) is the European Union’s landmark regulation reshaping how AI technologies are developed and deployed, with the aim of ensuring safe and ethically sound adoption. The Act entered into force on August 1, 2024.
For organizations that use, procure, implement, or develop AI systems, it is critical to understand their obligations and the consequences of failing to comply. Obligations are phased in over three years, with full enforcement by August 2027. The EU's stance is clear: both EU-based organizations and non-EU entities covered by the Act's extraterritorial reach must comply in a timely manner or face heavy financial penalties, in some cases exceeding those imposed under the GDPR.
Defined Roles and Responsibilities
The AI Act precisely outlines stakeholder roles throughout the AI system lifecycle. Clarity around these roles is essential for compliance and risk avoidance:
- Providers/developers: create, supply, or place AI systems on the market. They must ensure safety and transparency and maintain detailed technical documentation.
 
- Users (deployers): implement transparency and risk-management measures and ensure end-users are clearly informed when interacting with AI systems.
 
- Importers and distributors: verify compliance of AI systems entering the EU market, maintain documentation, and report risks or irregularities to regulators.
 
Importantly, obligations extend to non-EU organizations (for example, companies in Serbia) if they place AI models or systems on the EU market.
Risk Classification and Compliance Obligations
AI systems are categorized by risk, with tailored obligations and sanctions:
- Unacceptable risk: prohibited practices such as social scoring or manipulative techniques, subject to the strictest penalties.
 
- High risk: systems that significantly impact safety or fundamental rights (e.g., medical devices, AI in education or employment). These face stringent obligations across the entire product lifecycle.
 
- Limited risk: primarily subject to transparency requirements.
 
- Minimal risk: no mandatory measures, though voluntary codes of conduct are encouraged.
 
Example: a small EU-based startup developing an AI system to select law school applicants falls under the high-risk category (Annex III). This triggers conformity assessments, transparency obligations, and ongoing monitoring during deployment.
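To make the classification step concrete, the short sketch below shows how an internal triage check might flag such a use case for the high-risk track. It is purely illustrative: the area list is a simplified, non-exhaustive stand-in for Annex III, and the function name and tags are our own; an actual classification still requires a legal assessment.

```python
# Illustrative only: a simplified, non-exhaustive subset of Annex III
# high-risk areas, used to sketch an internal triage step that flags
# systems for a full legal and conformity assessment.
ANNEX_III_AREAS_ILLUSTRATIVE = {
    "education_admission",      # e.g. selecting law school applicants
    "employment_recruitment",   # e.g. CV screening, candidate ranking
    "credit_scoring",
    "medical_device_component",
}

def preliminary_risk_triage(intended_purpose_tags: set[str]) -> str:
    """Return a preliminary risk label; the final classification needs legal review."""
    if intended_purpose_tags & ANNEX_III_AREAS_ILLUSTRATIVE:
        return "potentially high-risk (Annex III) - conformity assessment likely"
    return "not flagged - still check prohibited practices and transparency duties"

# The startup from the example above: admissions selection is an
# education-related use case, so it is flagged for the high-risk track.
print(preliminary_risk_triage({"education_admission"}))
```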
Penalties: Structure and Scale
Fines under the AI Act are designed to enforce consistent compliance across the EU. Modeled on the GDPR, they are calculated based on the type of violation, its severity, and the size of the organization. For each tier, the higher of a fixed amount or a percentage of global annual turnover applies. Special rules apply to SMEs.
- Prohibited practices (unacceptable risk): up to €35 million or 7% of global turnover.
 
- Breaches of general obligations (high and limited risk): up to €15 million or 3% of global turnover.
 
- False or misleading information to regulators: up to €7.5 million or 1% of global turnover.
 
All stakeholders, including providers, deployers, importers, distributors, and even competent authorities acting as deployers, may be fined.
SMEs (small and medium-sized enterprises) face proportionally adjusted penalties: the lower of the fixed cap or the percentage of turnover applies. Even so, fines can be significant. For example, if 3% of global turnover equals €150,000 but the fixed cap is €15 million, the maximum fine for the SME would be €150,000.
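For readers who want the cap logic spelled out, the sketch below encodes the tiers quoted above. The function and parameter names are illustrative; the sme flag simply switches from the "higher of" rule to the "lower of" rule.

```python
# Sketch of the cap rules described above. Tier caps follow the figures
# quoted in this article; the sme flag applies the "lower of" rule that
# holds for SMEs instead of the default "higher of" rule.
TIER_CAPS = {
    "prohibited_practices":   (35_000_000, 0.07),  # €35M or 7% of turnover
    "general_obligations":    (15_000_000, 0.03),  # €15M or 3% of turnover
    "misleading_information": (7_500_000, 0.01),   # €7.5M or 1% of turnover
}

def max_fine(tier: str, annual_worldwide_turnover: float, sme: bool = False) -> float:
    fixed_cap, pct = TIER_CAPS[tier]
    turnover_based = pct * annual_worldwide_turnover
    # Non-SMEs: the higher of the two amounts; SMEs: the lower of the two.
    return min(fixed_cap, turnover_based) if sme else max(fixed_cap, turnover_based)

# The SME example from the text: 3% of a €5M turnover is €150,000,
# below the €15M fixed cap, so €150,000 is the ceiling.
print(max_fine("general_obligations", 5_000_000, sme=True))   # 150000.0
print(max_fine("prohibited_practices", 1_000_000_000))        # 70000000.0 (7% exceeds €35M)
```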
Special Rules for EU Institutions and Agencies
If EU institutions, bodies, or agencies violate the Act, lower fines apply compared to the private sector:
- Prohibited practices: up to €1.5 million.
 
- Other breaches: up to €750,000.
 
They have the right to be heard before a final decision is adopted. Fines are paid into the EU budget, and enforcement and reporting fall under the European Data Protection Supervisor (EDPS).
GPAI and Systemic-Risk Models (GPAISR)
Beyond application-based categories, the Act introduces obligations for General-Purpose AI models (GPAI) and GPAI models with systemic risk (GPAISR). The European Commission may formally designate certain models as GPAISR, triggering stricter requirements and penalties.
For GPAI providers, non-compliance, such as failing to provide required information or obstructing access for evaluation, can result in fines of up to €15 million or 3% of global turnover, whichever is higher.
Decisions are subject to full judicial review by the Court of Justice of the European Union, which may annul, reduce, or increase penalties.
Preparing for Compliance and Minimizing Risk
Organizations developing, deploying, or using AI systems must act early to establish compliance frameworks.
Recommended steps:
- Map roles and classify systems: determine whether you are acting as a provider, deployer, importer, or distributor, and assess which risk category each system falls into (unacceptable, high, limited, or minimal); an illustrative inventory sketch follows this list.
 
- Establish an AI governance framework: define policies, responsibilities, and recordkeeping processes; develop AI literacy programs aligned with roles and risk exposure.
 
- Meet high-risk system obligations: implement robust data governance (quality, representativeness, bias mitigation), prepare technical documentation, ensure transparency and human oversight, conduct post-market monitoring, and maintain readiness for regulatory engagement.
 
- Plan ahead: while a transition period exists, last-minute compliance drives up costs and risk. Early engagement with AI governance and compliance experts is strongly advised.
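As a starting point for the mapping and recordkeeping steps above, the sketch below shows one possible shape for an internal AI-system inventory entry. The field names are our own illustration and are not prescribed by the Act; legal review still determines the final classification and the obligations attached to it.

```python
# Minimal sketch of an internal AI-system inventory entry supporting the
# "map roles and classify systems" step. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    our_role: str                 # "provider", "deployer", "importer", "distributor"
    risk_category: str            # "unacceptable", "high", "limited", "minimal"
    placed_on_eu_market: bool
    technical_docs_ready: bool = False
    human_oversight_defined: bool = False
    open_actions: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="applicant-screening-model",
        our_role="provider",
        risk_category="high",
        placed_on_eu_market=True,
        open_actions=["conformity assessment", "post-market monitoring plan"],
    ),
]

# A simple gap report: every high-risk entry with outstanding documentation.
for rec in inventory:
    if rec.risk_category == "high" and not rec.technical_docs_ready:
        print(f"{rec.name}: technical documentation outstanding")
```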
 
Bottom line: a proactive compliance strategy reduces the likelihood of fines, safeguards reputation, and strengthens trust among users and regulators.