Artificial intelligence brings tangible benefits: it increases productivity, enables new forms of innovation, and streamlines everyday processes. However, this progress is accompanied by serious risks: bias, disinformation, manipulation, identity theft, fraud, lack of transparency, dependency, and dehumanization. Within this spectrum, deepfake technology stands out for the intensity and scope of the harm it can cause, which is why it requires responsible and systematic regulation.
According to the World Economic Forum's Global Risks Report 2024, misinformation and disinformation are identified as the most severe short-term global risks, with explicit emphasis on AI-generated content (including deepfakes). The most pronounced consequences include erosion of trust in institutions, manipulation of public opinion, and the undermining of democratic processes.
Rapid Rise and Technical Development
What was, just a few years ago, a niche technology requiring technical expertise and vast resources is today widely accessible. From a single high-quality photo, it is possible to generate a convincing one-minute deepfake video free of charge and in about twenty minutes. Studies record a sharp increase in the volume of deepfake content online between 2019 and 2023, while Europol estimates that by 2026 around 90% of digital content could be AI-generated.
Two factors drive this trend: the continuous advancement of Generative Adversarial Networks (GANs) and the mass availability of easy-to-use tools that put creation and distribution within reach of unskilled users. A GAN pairs two neural networks: a generator and a discriminator. The generator attempts to produce synthetic data convincing enough to “fool” the discriminator, while the discriminator is trained to distinguish forgeries from genuine samples. This competitive dynamic explains why the boundary between authentic and synthetic content is constantly shifting, complicating both regulation and detection.
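To make the adversarial dynamic concrete, the following is a minimal, illustrative sketch of a GAN training step in PyTorch; the architectures, dimensions, and hyperparameters are placeholder assumptions, not any particular deepfake model.

```python
# Minimal GAN training step (illustrative only, not a production deepfake model).
# Assumes PyTorch; architectures and hyperparameters are arbitrary placeholders.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 64, 784  # e.g., flattened 28x28 images

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, DATA_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_batch: torch.Tensor) -> None:
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) The discriminator learns to separate real samples from the generator's fakes.
    noise = torch.randn(batch_size, LATENT_DIM)
    fake_batch = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_batch), real_labels) + \
             loss_fn(discriminator(fake_batch), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) The generator learns to produce samples the discriminator accepts as real;
    #    this is the competitive pressure that keeps improving the fakes.
    noise = torch.randn(batch_size, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```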
How Fake Media Harms Individuals and Society
a) Personal harm. Most publicly available deepfake content is pornographic in nature and disproportionately targets women. Such material is typically created without consent, with severe reputational and psychological consequences for victims; public figures are not exempt, as confirmed by recent viral cases enabled by weak moderation on media platforms.
b) Financial fraud. A case in Hong Kong was reported in which an employee, believing they were on a video call with managers (who were in fact deepfakes), authorized a transfer of USD 25 million.
c) Political manipulation. In one U.S. survey, 77% of respondents said they had seen AI-generated deepfake content, and 36% stated that this content significantly influenced their voting decisions, an alarming indicator of damage to trust in media and democratic processes themselves.
Why a Ban Is Not the Solution
Although the risks are high, an outright ban is untenable, because the technology also has legitimate and socially beneficial uses. In public health, a single speaker can credibly deliver messages in multiple languages; in education, realistic simulations and interactive materials are being developed; overcoming language barriers makes content accessible to broader audiences; other important applications include reconstructions of historical figures, satire, and artistic creation. The goal is therefore to enforce responsible use through precise regulation that separates abuses threatening rights and security from legitimate uses serving the public interest.
Technical Solutions: Detection, Watermarking, C2PA
Deepfake detectors estimate the likelihood that material is AI-generated, but they tend to lag behind generators because their reference databases quickly become outdated. In addition, they return only probabilistic assessments (e.g., “75% likely synthetic”), which are difficult to translate directly into consistent content-removal policies.
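To illustrate why probabilistic scores sit awkwardly with enforcement, here is a small, hypothetical sketch; the thresholds are invented policy choices, and real moderation pipelines are far more involved.

```python
# Illustrative only: mapping a detector's probability score to a moderation action.
# The thresholds are arbitrary policy choices, which is precisely the governance problem.
REMOVE_THRESHOLD = 0.95   # near-certainty required before removal
LABEL_THRESHOLD = 0.60    # lower bar for merely attaching a warning label

def moderation_action(probability_synthetic: float) -> str:
    if probability_synthetic >= REMOVE_THRESHOLD:
        return "remove"
    if probability_synthetic >= LABEL_THRESHOLD:
        return "label"
    return "keep"

# A score of 0.75 ("75% likely") falls between the thresholds: the platform can
# label the content but has no firm basis for removal, illustrating the gap
# between probabilistic detection and binary enforcement decisions.
print(moderation_action(0.75))  # -> "label"
```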
Watermarking holds promise, but the relevant standards are still maturing, so implementations remain inconsistent and questions about accuracy and resistance to manipulation remain open.
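As a toy illustration of why robustness matters, the sketch below embeds a mark in an image's least significant bits using NumPy; this is not any standardized AI watermarking scheme, and such a naive mark is easily destroyed by recompression or resizing.

```python
# Toy least-significant-bit (LSB) watermark, illustrative only.
# This is NOT a standardized AI-content watermark; it merely shows how fragile
# naive marks are: any re-encoding or quantization can erase the embedded bits.
import numpy as np

def embed_mark(image: np.ndarray, mark_bits: np.ndarray) -> np.ndarray:
    """Overwrite the least significant bit of the first pixels with mark bits."""
    marked = image.copy()
    flat = marked.reshape(-1)
    n = min(flat.size, mark_bits.size)
    flat[:n] = (flat[:n] & 0xFE) | mark_bits[:n]
    return marked

def read_mark(image: np.ndarray, length: int) -> np.ndarray:
    """Recover the first `length` embedded bits."""
    return image.reshape(-1)[:length] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
mark = rng.integers(0, 2, size=64, dtype=np.uint8)

marked = embed_mark(image, mark)
print(np.array_equal(read_mark(marked, 64), mark))    # True: mark survives

# Simulate lossy processing (e.g., quantization during recompression):
degraded = ((marked // 4) * 4).astype(np.uint8)
print(np.array_equal(read_mark(degraded, 64), mark))  # almost certainly False
```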
The C2PA (Coalition for Content Provenance and Authenticity) initiative promotes standards for content provenance and integrity based on cryptographic signatures and metadata describing the source of content and its subsequent modifications. The effectiveness of these standards, however, depends on broad, coordinated adoption by platforms, media outlets, and providers; without it, their reach will remain limited.
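The idea behind provenance signing can be sketched with standard public-key cryptography; the example below uses the Python `cryptography` library's Ed25519 keys to sign a content hash plus a small provenance record. It is a simplified illustration of the principle, not an implementation of the actual C2PA manifest format.

```python
# Simplified illustration of provenance signing in the spirit of C2PA.
# This is NOT the real C2PA manifest format; it only shows the underlying idea:
# bind a content hash and provenance metadata to a cryptographic signature.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_manifest(content: bytes, source: str, edits: list[str]) -> bytes:
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "source": source,        # e.g., the capturing device or generating tool
        "modifications": edits,  # subsequent edits declared along the way
    }
    return json.dumps(record, sort_keys=True).encode()

# The publisher signs the manifest with its private key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"...image or video bytes..."
manifest = make_manifest(content, source="example-camera", edits=["crop", "color-correct"])
signature = private_key.sign(manifest)

# A platform or reader verifies the signature and re-checks the content hash.
try:
    public_key.verify(signature, manifest)
    record = json.loads(manifest)
    intact = record["content_sha256"] == hashlib.sha256(content).hexdigest()
    print("provenance verified:", intact)
except InvalidSignature:
    print("provenance record has been tampered with")
```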
Deepfakes and GDPR: Gaps in Regulation
The use of deepfake technology almost always involves processing personal data, often including special categories (e.g., facial and voice biometrics, data implying individuals’ health status, political beliefs, or sex life). Even when the content is entirely “fabricated,” linking it to a specific person activates the safeguards of the General Data Protection Regulation (GDPR).
Consent is rare in practice: material is often collected without the subject's knowledge (e.g., through automated scraping), which routinely breaches the strict conditions for processing special categories of data. Even then, GDPR's fundamental principles still apply, including purpose limitation, the requirement of a lawful basis, and the right to rectification; the latter is oddly paradoxical in the context of inherently false content, yet formally applicable.
Conclusion: GDPR formally applies to deepfakes, but it was not designed with this technology in mind and struggles to keep pace with its evolution, which produces inconsistent and sometimes counterintuitive interpretations. This underscores the need for strong AI governance mechanisms and disciplined data management.
Deepfakes and the DSA: Transparency Gains, but Gaps Remain
The Digital Services Act (DSA) introduces two important mechanisms for deepfakes and manipulative content. The first is the obligation to label content as AI-generated or altered (e.g., through watermarks or other markings). The second is the notice-and-action regime: platforms must assess reports promptly and remove illegal or harmful content where removal is justified.
However, the scope of the DSA is limited: it applies primarily to intermediary services (digital platforms, online marketplaces), while tools for creating deepfake content, such as generative models and specialized apps, remain outside its direct reach. Private communications, including encrypted messengers, are also excluded. As a result, regulatory gaps persist, especially in the area of creation and distribution tools.
The EU AI Act: Classification as “Limited Risk” and Transparency Obligations
The EU AI Act does not ban deepfake technology but classifies it as a limited-risk AI system and imposes strict transparency obligations:
- Providers (including providers of general-purpose AI models – GPAI) must ensure that system outputs are machine-readable and labeled as artificially generated or modified (e.g., a watermark or similar technical solution); a minimal sketch of such a machine-readable label follows this list.
- Deployers (system users) are obliged to explicitly inform end users that the content they are interacting with is a deepfake.
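What a “machine-readable” label might look like in the simplest case can be sketched with image metadata; the example below writes a text label into a PNG using Pillow. The field names are invented for this illustration and are far weaker than the watermarking techniques the Act anticipates, since plain metadata is trivially stripped.

```python
# Illustrative only: the simplest possible machine-readable label, written as
# PNG text metadata with Pillow. The field names are invented for this sketch;
# plain metadata is trivially stripped, which is why more robust marking
# techniques such as watermarking are also envisaged.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_label(image: Image.Image, path: str, tool_name: str) -> None:
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")  # hypothetical field name
    metadata.add_text("generator", tool_name)  # hypothetical field name
    image.save(path, pnginfo=metadata)

def read_ai_label(path: str) -> dict:
    with Image.open(path) as img:
        return dict(img.text)  # text chunks parsed back as a dict

# Example round trip with a dummy image:
save_with_ai_label(Image.new("RGB", (64, 64)), "labeled.png", tool_name="demo-generator")
print(read_ai_label("labeled.png"))  # {'ai_generated': 'true', 'generator': 'demo-generator'}
```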
A key open question remains: is transparency alone sufficient?
The example of deepfake pornography shows that even with a clear label, the victim suffers negative consequences: emotional, reputational, and psychological. The same applies to political manipulation, financial fraud, or incitement to violence: in such cases, labeling the origin of content cannot be enough. Thus, it is legitimate to ask whether the “limited risk” classification adequately addresses the real negative consequences of deepfakes and the complexity of this phenomenon.
Conclusion
Deepfakes are not merely a technical achievement but an emerging social threat that undermines the basic human ability to rely on one’s senses. In doing so, they erode the foundations of trust in media, institutions, and interpersonal relationships.
The EU AI Act is an important step toward responsible AI governance, but its transparency-focused approach will likely not, on its own, mitigate the multiple harms that deepfakes generate. As technological development continues to outpace regulatory evolution, a comprehensive, stricter, and thoughtfully designed regulatory framework is needed, one that provides both protection from risks and room for legitimate, beneficial applications.