
Comparing the obligations under the GDPR and AI Act – where is the overlap?

22/10/2024

In the wake of the recently enacted EU AI Act, the atmosphere feels much like the years when companies were racing to align their operations with the GDPR. Both regulations represent significant shifts in how businesses must operate in the EU, particularly when it comes to managing data and emerging technologies.

The GDPR and the AI Act share similarities in their regulatory style, both aiming to protect fundamental rights while promoting responsible innovation. They both establish detailed compliance obligations for businesses and impose strict penalties for non-compliance, including substantial fines. However, while the GDPR focuses on personal data protection, the AI Act targets the ethical and safe use of AI systems, introducing a broader scope for regulation beyond just personal data.

As businesses increasingly rely on AI systems that process vast amounts of personal data, it is crucial to understand how these two frameworks overlap. In this blog, we will explore the obligations under the GDPR and the AI Act, highlighting the key areas where these regulations converge, and what this means for companies operating in these fields.

 

Similarities between GDPR and AI Act

 

Regulatory Style:

Both are broad, EU-wide regulations that apply directly to businesses without needing implementation by individual member states.

They establish a framework that companies must follow, with clear requirements and standards.

Extraterritorial scope:

One notable similarity between the GDPR and the AI Act is their extraterritorial scope. Both regulations extend their reach beyond the borders of the European Union, applying to any entity that processes personal data or develops/deploys AI systems affecting individuals within the EU, regardless of where the company is based. This means that non-EU businesses must comply with these regulations if their activities involve EU residents, ensuring that the rights of individuals are protected regardless of national borders.

Risk-Based Approach:

Both regulations employ a risk-based approach. Under the GDPR, data controllers and processors are required to evaluate risks to individuals’ privacy. The AI Act takes this a step further by mandating a systematic risk classification of AI systems—categorizing them as unacceptable, high, limited, or minimal risk—with each tier carrying specific compliance obligations.
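To illustrate how this tiered classification translates into obligations, the sketch below maps each AI Act risk tier to a simplified paraphrase of what it entails. The summaries are illustrative shorthand, not the statutory text:

```python
# Simplified, illustrative mapping of AI Act risk tiers to the flavor
# of obligation each carries; paraphrased, not statutory language.
RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g. social scoring by public authorities)",
    "high": "risk management, conformity assessment, human oversight, logging",
    "limited": "transparency duties (e.g. disclosing that users interact with AI)",
    "minimal": "no mandatory obligations; voluntary codes of conduct",
}

def obligations_for(tier: str) -> str:
    """Return the simplified obligation summary for a given risk tier."""
    return RISK_TIERS.get(tier, "unknown tier")

print(obligations_for("high"))
# -> risk management, conformity assessment, human oversight, logging
```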

Principles:

Both the GDPR and the AI Act are grounded in similar foundational principles that prioritize the protection of individuals and the responsible use of technology. Transparency is a key principle in both regulations: organizations must clearly communicate how personal data is processed and how AI systems function, ensuring that users are informed about potential risks and the nature of decision-making processes. Accountability also plays a vital role, with the GDPR requiring data controllers to demonstrate compliance, while the AI Act requires that AI providers, deployers, importers, and distributors maintain human oversight to ensure the responsible use of AI technologies. Additionally, the principle of accuracy is central to both frameworks. Under the GDPR, personal data must be kept accurate and up to date, while the AI Act requires that stakeholders take responsibility for the outputs of AI systems. This includes being accountable for inaccuracies, such as those that may arise when an AI “hallucinates,” operates as a black box, or produces responses based on incorrect training data.

Penalties for Non-Compliance:

Both the GDPR and the AI Act impose strict fines for violations. The GDPR allows for fines of up to 4% of global annual turnover or €20 million, whichever is higher. The AI Act goes further, providing for fines of up to €35 million or 7% of global annual turnover for the most serious violations, scaled according to the severity of the non-compliance.
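The “whichever is higher” mechanics can be made concrete with a small worked example. The sketch below computes only the maximum fine ceilings; in practice the actual fine depends on the severity of the violation and, under the AI Act, on which tier of infringement applies:

```python
def gdpr_fine_cap(global_turnover_eur: float) -> float:
    """GDPR ceiling: 4% of global annual turnover or EUR 20 million,
    whichever is higher."""
    return max(0.04 * global_turnover_eur, 20_000_000)

def ai_act_fine_cap(global_turnover_eur: float) -> float:
    """AI Act top-tier ceiling: 7% of global annual turnover or
    EUR 35 million, whichever is higher."""
    return max(0.07 * global_turnover_eur, 35_000_000)

# For a company with EUR 1 billion in global annual turnover,
# the percentage prong exceeds the fixed amount in both cases:
print(gdpr_fine_cap(1_000_000_000))    # 40000000.0 (EUR 40 million)
print(ai_act_fine_cap(1_000_000_000))  # 70000000.0 (EUR 70 million)
```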

 

Differences between GDPR and AI Act

 

Scope of Regulation:

GDPR is focused solely on the protection of personal data (information that can identify an individual).

AI Act regulates AI systems, regardless of whether they process personal data. Its scope includes how AI is developed, marketed, and deployed across various sectors.

Targeted Roles in Regulation:

The GDPR primarily targets data controllers and data processors—entities that collect, store, or process personal data. In contrast, the AI Act focuses on AI providers, deployers, importers, and distributors, regulating anyone involved in the development, deployment, or use of AI systems, even if personal data is not processed. This includes third parties that place AI systems on the EU market. Importantly, there is no direct correlation between these roles. An AI provider, for instance, may also be classified as a data controller or processor under the GDPR, depending on whether it determines the purposes and means of processing personal data. As a result, businesses may find themselves subject to different or overlapping roles and obligations, depending on their specific activities under each regulation.

Regulatory Enforcement:

GDPR enforcement is primarily carried out by data protection authorities (DPAs) in each member state.

The AI Act envisions a broader enforcement framework, with national competent authorities (at least one notifying authority and at least one market surveillance authority), the AI Office, the European Artificial Intelligence Board (EAIB), and the Scientific Panel of Independent Experts playing roles in monitoring compliance.

 

Overlap Between the GDPR and AI Act

 

Not all data processing regulated by the GDPR will necessarily involve AI, so not all controllers and processors of personal data will fall within the scope of the AI Act. On the other hand, companies providing, deploying, or importing AI systems will almost certainly be required to comply with both frameworks. While the GDPR applies to the processing of personal data, the AI Act regulates the development and use of AI systems, many of which involve the processing of personal data. This creates a significant area of intersection for businesses.

A key consideration for businesses is that AI systems often process large amounts of both personal and non-personal data, making it technically difficult to distinguish between the two. As AI technologies rely heavily on datasets to function, it is highly likely that personal data will be involved at some stage in the AI system’s lifecycle, whether during training, deployment, or use. This increases the likelihood that the provisions of the GDPR will be triggered alongside the obligations under the AI Act.

As a result, most AI providers, deployers, and users will need to comply with both sets of regulations. They will need to manage not only the risks associated with AI (as required by the AI Act) but also ensure that any personal data involved is processed in accordance with the GDPR. This means businesses must be diligent in identifying whether personal data is being used and implement compliance mechanisms that cover both regulations.

In the following sections of the blog, we will outline some of the specific obligations that businesses covered by the AI Act must adhere to under the GDPR.

Choosing the Right Legal Basis:

AI providers and deployers that process personal data must rely on one of the six legal bases under the GDPR to justify the processing: consent, legitimate interest, the performance of a contract, legal obligation, vital interests, or the performance of a task carried out in the public interest. Without a valid legal basis, processing personal data is unlawful.

Defining Processing Purposes:

The purposes for which personal data is processed must be clearly and transparently defined. AI providers and deployers must ensure that each phase of their operations—such as training, developing, or deploying AI systems—has a specific, legitimate purpose that justifies the data processing. The legal basis may also differ depending on the phase, with certain stages potentially requiring a different justification for data use.

Technical and Organizational Measures:

Both the GDPR and AI Act require businesses to implement appropriate technical and organizational measures to ensure data security. These measures may include encryption, pseudonymization, and other safeguards to protect personal data. Given that most data is processed electronically, the security measures necessary to comply with both regulations are often similar, and a well-structured security framework can help meet the requirements of both.
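As a rough illustration of one such measure, the sketch below pseudonymizes a direct identifier with a keyed hash. It is a minimal sketch under stated assumptions, not a complete security program: the key handling is illustrative, and in practice the key must be stored separately from the pseudonymized dataset (pseudonymized data remains personal data under the GDPR, since re-identification is possible with the key):

```python
import hashlib
import hmac

# Illustrative only: in a real deployment the key comes from a key
# vault or secrets manager, never from source code.
SECRET_KEY = b"store-me-in-a-separate-key-vault"

def pseudonymize(identifier: str) -> str:
    """Derive a stable pseudonym from a direct identifier via HMAC-SHA256.

    Unlike a plain hash, the keyed construction resists dictionary
    attacks as long as the key stays secret.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

print(pseudonymize("jane.doe@example.com"))
```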

Keeping Data Records:

AI providers and deployers must maintain records of their data processing activities, especially if they are processing personal data on a large scale or handling sensitive data. These records should detail the categories of data processed, the purposes, and the data retention periods. Keeping accurate and up-to-date records is essential to demonstrate compliance with GDPR requirements.
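One simple way to keep such records structured is sketched below. The field names are illustrative assumptions loosely modeled on what Article 30 GDPR asks a record of processing activities to contain, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessingRecord:
    """Illustrative record-of-processing entry (cf. Art. 30 GDPR)."""
    purpose: str                        # why the data is processed
    legal_basis: str                    # one of the six GDPR legal bases
    data_categories: list[str]          # e.g. contact details, usage logs
    data_subjects: list[str]            # e.g. customers, employees
    recipients: list[str] = field(default_factory=list)
    retention_period: str = "unspecified"

record = ProcessingRecord(
    purpose="Fine-tuning a customer-support model",
    legal_basis="legitimate interest",
    data_categories=["chat transcripts"],
    data_subjects=["customers"],
    retention_period="12 months",
)
print(record)
```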

Data Processing Agreement (DPA):

If a third-party AI system is used or if service providers are involved in supporting the AI’s operations, AI providers or deployers may need to conclude a Data Processing Agreement (DPA). This agreement ensures that any external parties involved in data processing adhere to the GDPR’s data protection standards, making clear the responsibilities of each party.

Bias Prevention and Detection:

Both the GDPR and the AI Act require careful consideration to prevent bias and discrimination, especially when processing sensitive data such as racial or ethnic origin, political opinions, religious beliefs, biometric data, sex life, or sexual orientation. Under the GDPR, the processing of such data requires extra caution to avoid bias and protect individual rights. Meanwhile, the AI Act allows for the processing of special categories of personal data, subject to safeguards, for the purposes of bias monitoring, detection, and correction.

 

High-Risk Data Processing and High-Risk AI Systems

 

Both the GDPR and the AI Act take a risk-based approach to regulation, but the definitions of “high risk” differ between the two frameworks. High-risk data processing under the GDPR refers to any processing activity that is likely to pose a significant risk to individuals’ rights and freedoms. In contrast, the AI Act identifies high-risk AI systems based on their potential to significantly impact individuals or society, especially when these systems are used in critical sectors such as healthcare, law enforcement, or employment.

The distinction in how risk is categorized by these regulations leads to different sets of obligations. High-risk data processing under GDPR necessitates specific safeguards and assessments, while high-risk AI systems under the AI Act are subject to stringent requirements to ensure transparency, fairness, and safety.

 

Additional Obligations for High-Risk Processing and AI Systems

 

  • Conducting DPIA, FRIA, and Conformity Assessment

Under the GDPR, organizations must conduct a Data Protection Impact Assessment (DPIA) whenever their data processing activities are likely to result in high risks to the rights and freedoms of individuals. This is particularly relevant in cases involving sensitive data, large datasets, or automated decision-making processes. DPIAs help businesses identify and address potential risks, ensuring they put measures in place to mitigate those risks before they materialize. When deploying AI systems, a DPIA will almost always be necessary, because AI technologies typically involve large-scale data processing, often with sensitive personal data, and automated decision-making, all of which are considered high-risk activities under the GDPR.

In parallel, the AI Act requires a Fundamental Rights Impact Assessment (FRIA) for high-risk AI systems, focusing specifically on the system’s potential impact on fundamental rights, including privacy, equality, and non-discrimination. This assessment goes beyond data protection, examining how AI might affect individuals’ broader rights and freedoms. Additionally, a Conformity Assessment must be conducted to verify that the high-risk AI system complies with the AI Act’s stringent requirements regarding safety, transparency, and accountability. The Conformity Assessment covers technical evaluations of the system’s design, operation, and ability to be overseen by humans. Both the FRIA and Conformity Assessment serve as critical checks for AI systems, and where there is overlap with the DPIA, organizations can streamline these processes to avoid duplicating efforts while ensuring comprehensive risk management.

  • Automated Decision-Making Framework

Both the GDPR and the AI Act regulate automated decision-making systems, with a shared emphasis on human oversight. Under the GDPR, individuals must be informed when decisions are made purely by automated processes, especially if these decisions significantly affect them. Moreover, they have the right to request human intervention to review the decision, ensuring that the outcome is fair and their rights are protected. This is particularly relevant in contexts like credit scoring, hiring, or legal processes where automated decisions could have life-altering impacts.

The AI Act builds on this by requiring high-risk AI systems to be designed with built-in mechanisms for human oversight, ensuring that a natural person can effectively monitor and intervene when needed. This “human-oversight-by-design” approach means that AI providers must integrate tools that allow for human control throughout the AI system’s lifecycle, ensuring its decision-making process remains transparent and accountable.
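As a rough sketch of what “human-oversight-by-design” can look like in practice, the example below routes any significant or low-confidence automated decision to a human reviewer before it takes effect. The thresholds and field names are illustrative assumptions, not requirements drawn from the AI Act:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str
    significant_effect: bool    # e.g. credit denial, hiring rejection
    model_confidence: float

def finalize(decision: Decision, review_queue: list) -> str:
    """Auto-finalize only low-impact, high-confidence decisions;
    everything else waits for a human reviewer."""
    if decision.significant_effect or decision.model_confidence < 0.9:
        review_queue.append(decision)   # human intervention required
        return "pending human review"
    return decision.outcome

queue: list = []
print(finalize(Decision("applicant-42", "reject", True, 0.97), queue))
# -> pending human review: a natural person must confirm the outcome
```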

In conclusion, the use of AI systems is increasingly drawing the attention of data protection authorities, who have already issued fines for non-compliance with the GDPR in relation to AI use. For instance, the Italian DPA fined OpenAI over ChatGPT’s privacy violations, the French DPA penalized Clearview AI for unlawful data collection, and the Dutch DPA took action against the Dutch Tax and Customs Administration for biased automated decision-making. As both the GDPR and the AI Act impose strict requirements, businesses that fall under the scope of both regulations will need to ensure compliance with each. Failing to do so could expose them to significant penalties under both frameworks, each of which provides for substantial fines scaled to the seriousness of the non-compliance.
