After getting familiar with the basic principles of the AI Act, the next task is to understand which practices linked to the use of AI are allowed, and which are considered so risky that they are prohibited. The importance of understanding and properly preventing prohibited practices is underlined by the fact that Article 5 of the AI Act, which governs them, becomes applicable significantly earlier than the rest of the Act: on February 2, 2025.
With that date approaching, it is time for a thorough analysis of the prohibited practices, in order to better understand the ways in which AI must not be used.
All of the prohibitions target AI systems that use certain methods, or pursue certain goals, that are not considered justified, safe, or lawful. In particular, the AI Act prohibits the following practices:
1. Manipulative or deceptive methods affecting one’s decision-making
The first prohibition under the AI Act aims to prevent negative impacts on a person through deceptive, manipulative, or other subliminal techniques implemented in an AI system. The described techniques are those which cause an individual to make a decision that he or she otherwise would not make, and which harms or is likely to harm that particular individual, another person or a group.
In particular, this prohibition forbids:
- placing on the market,
- putting into service, or
- using any such AI system.
The described techniques are unlawful because they impair a person’s ability to make an informed decision about a certain issue. Simply put, this limitation covers methods that affect a person’s consciousness and, as a result, trigger actions or inaction that are not in accordance with the person’s free will.
A frequent example of this kind of practice is using an AI system on an online shopping platform to collect information about the user’s habits and past purchases. Based on the collected data, the AI system can learn to predict the particular moments when the user is most likely to act impulsively and buy something they do not, in fact, need. Such an AI system would be deemed manipulative if it used those moments to display limited-time offers such as “Only 5 minutes left for 50% discount!”, since such a message may push the user to act on information that is, most likely, not true.
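To make the pattern concrete, here is a minimal, purely hypothetical sketch of the kind of targeting logic described above. The class, function, and threshold are all illustrative assumptions, not taken from any real system; the point is that timing a fabricated urgency message to a predicted moment of impulsivity is exactly the manipulative technique Article 5 prohibits.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    """Hypothetical profile built from browsing and purchase history."""
    impulsivity_score: float  # assumed 0.0-1.0 prediction of impulsive buying

def choose_banner(profile: UserProfile) -> str:
    """Illustrative (prohibited) pattern: the system times a fabricated
    urgency message to moments of predicted impulsivity, steering the
    user toward a purchase they would otherwise not make."""
    if profile.impulsivity_score > 0.8:  # threshold is an assumption
        return "Only 5 minutes left for 50% discount!"  # fabricated urgency
    return "Browse our catalogue"
```

Deploying logic like this would fall under the first prohibition precisely because the message exploits the user’s predicted state rather than informing their decision.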
2. Distorting a person’s behaviour by exploiting their vulnerabilities
EU subjects must not place on the market, put into service, or use AI systems that exploit a person’s specific vulnerability to influence that person’s behaviour in a way that might cause harm to them or to another individual.
For example, the fact that a person or a group:
- has some type of disability,
- is of a certain age, or
- is in a specific social or economic situation,
represents a vulnerability that must not be exploited, according to this prohibition.
In practice, this prohibited practice usually takes the form of marketing that specifically targets a sensitive category of people with the intention of motivating them to act on their vulnerable situation. For instance, imagine that an AI system used for analyzing users’ browsing habits identifies that a particular user is in a bad financial situation, because they searched for job advertisements or used keywords such as “budget-friendly” or “low cost”. If such an AI system starts serving those users marketing content advertising illegal jobs promising “easy money”, or risky behaviour such as gambling, it would be prohibited under the AI Act.
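The scenario above can be sketched as a simple targeting rule. Everything here is a hypothetical illustration (the keyword list, the ad labels, the function name are all assumptions): inferring financial distress from search terms and answering it with risky “easy money” offers is the exploitation of a vulnerability that this prohibition forbids.

```python
def select_ad(search_terms: list[str]) -> str:
    """Illustrative (prohibited) rule: signals of financial distress are
    used to steer the user toward risky offers, exploiting the very
    vulnerability that was inferred from their behaviour."""
    distress_keywords = {"budget-friendly", "low cost", "job advertisements"}
    if any(term in distress_keywords for term in search_terms):
        return "easy-money gambling ad"  # exploits the inferred vulnerability
    return "neutral ad"
```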
3. Social scoring that may result in unfavourable outcomes
Let’s move on to the next prohibited practice: using artificial intelligence systems for classifying people based on their behavior or characteristics, resulting in individuals being negatively socially scored.
What does this mean in practice?
“Negative” social scoring leads to one or both of the following consequences:
- unfavorable treatment of a person or a group in a social context, in a manner unrelated to the context in which the data about that individual was originally collected, or
- unjustifiably or disproportionately unfavorable treatment of a person or a group.
The reason why the described use of AI systems is prohibited is that it may easily lead to violation of some of the fundamental human rights and values such as dignity, justice, or equality.
A good example of a business practice that would be non-compliant with the AI Act is implementing an AI tool in a payment system to analyze individuals’ financial habits (such as their tendency to spend or save money). Based on the collected data, the AI tool can assign a social score to the individuals involved in the analysis and treat them unfavorably by, for instance, considering them unsuitable candidates for a bank loan. Such use of AI would lead to discrimination and may significantly compromise basic human rights and freedoms, which is why it is forbidden under the AI Act.
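A minimal sketch of that example follows. The scoring formula and the loan threshold are invented for illustration; the problem the Act targets is structural, namely that a score derived from payment behaviour is reused in an unrelated context (loan eligibility) to a person’s detriment.

```python
def social_score(monthly_savings_rate: float) -> float:
    """Hypothetical 0-100 score derived from spending habits observed
    in a payment system (the formula is an illustrative assumption)."""
    return max(0.0, min(100.0, monthly_savings_rate * 100))

def loan_decision(score: float) -> str:
    """Reusing the behavioural score in an unrelated context (loan
    eligibility) to a person's detriment is the kind of treatment the
    social-scoring prohibition forbids."""
    return "rejected" if score < 30 else "eligible"
```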
4. Profiling-based assessments of a person’s criminal tendencies
Another practice that must not be delegated to AI is assessing whether an individual shows a potential for committing crimes, based on their personality. Such a type of AI system would use machine learning models to analyze a person’s social interactions, personality type, or other characteristics to predict their possible actions without connecting them to existing criminal evidence.
How would such a system work? Well, it could, for instance, use an individual’s personality analysis results to conclude whether that person may be a potential criminal in the future. Such a conclusion would, evidently, not be based on verifiable and objective facts but on subjective criteria, and might lead to inaccuracies and discrimination, as well as to privacy concerns.
However, this prohibition is not absolute: it does not apply to AI systems that are only used to support a human assessment of an individual’s involvement in offences. The reason for this exception lies in the fact that such an assessment is assumed to already rest on facts that are objective, provable, and directly linked to actual criminal activity.
That way, the use of AI in law enforcement activities does not have to be forbidden entirely; instead, it can be redirected and limited to an appropriate extent.
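The prohibition and its carve-out can be summarized as a rough decision rule. This is a simplification for illustration, not legal advice, and the field names are assumptions: a purely profiling-based prediction is banned, while a system that merely supports a human assessment grounded in objective criminal evidence falls outside the ban.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """Illustrative features of an AI-assisted criminal-risk assessment."""
    uses_personality_profiling: bool
    grounded_in_criminal_evidence: bool
    human_makes_final_decision: bool

def prohibited(a: Assessment) -> bool:
    """Simplified reading of the rule described above: the carve-out
    applies only when the system supports a human decision based on
    objective, verifiable evidence; otherwise, profiling-based
    prediction of criminal tendencies is banned."""
    if a.grounded_in_criminal_evidence and a.human_makes_final_decision:
        return False  # supports human assessment based on objective facts
    return a.uses_personality_profiling
```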
5. Facial recognition based on untargeted scraping
It is not acceptable to use AI systems to mass-collect images and recordings of individuals in order to create facial recognition databases, where the data was not collected for that purpose in the first place. In other words, untargeted data scraping for the purpose of building facial recognition databases is not allowed under the AI Act.
Data scraping has for some time been a highly controversial issue from the legal point of view, and the adoption of the AI Act does not seem to quiet the debates on that topic, but quite the opposite.
This prohibition applies equally to publicly available data and to data with limited availability, such as footage collected by CCTV systems, which are usually deployed for security purposes such as surveilling certain areas (a company’s premises or the area surrounding its business facilities).
The main reason for this limitation is to restrict unauthorized data processing and, consequently, to protect individuals’ right to privacy and personal data protection. In essence, this prohibition simultaneously serves the objectives of both the AI Act and the GDPR.
6. Inferring emotions of an employee or a student
AI technologies that aim to draw a conclusion about employees’ or students’ emotions will also not be allowed, according to the provisions of the AI Act. In particular, this restriction applies to technologies that are based on artificial intelligence and have the objective of analyzing human emotions and drawing conclusions from the collected information.
For example, AI tools visually recording employees and recognizing their emotions during conference calls may fall under this prohibited category.
Why are such tools banned under the AI Act? Simply because they can lead to biases and other types of unjust treatment of individuals, or to more serious human rights breaches. An additional justification for specifically prohibiting this practice in employment and education scenarios is that work and study environments may often open the door to discrimination due to the unequal positions of the parties involved (employers and educational institutions have a certain power over employees and students).
However, not all AI emotion detection tools are viewed unfavorably: some are exempt from this prohibition. The AI Act stipulates that using such systems for medical or safety reasons is allowed, bearing in mind that, outside of “risky” environments like the workplace or school, these tools may be beneficial and serve positive purposes.
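The rule and its exemption reduce to a small check. This is an illustrative simplification, not the Act’s wording, and the context and purpose labels are assumptions: emotion inference is banned in workplace and education settings unless it is deployed for medical or safety reasons.

```python
def emotion_inference_allowed(context: str, purpose: str) -> bool:
    """Simplified reading of the emotion-recognition rule described
    above (an illustration, not legal advice): banned in workplace and
    education contexts, unless deployed for medical or safety reasons."""
    restricted_contexts = {"workplace", "education"}
    exempt_purposes = {"medical", "safety"}
    if context in restricted_contexts:
        return purpose in exempt_purposes
    return True  # this Article 5 ban targets the two contexts above
```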
7. Potentially discriminatory uses of biometric categorization systems
Let’s continue to the next prohibited practice under the AI Act: using AI systems for the categorization of biometric data on unlawful grounds.
This limitation forbids AI systems from using biometric data to categorize individuals on potentially discriminatory grounds, such as:
- race,
- political opinions,
- trade union membership,
- religious beliefs,
- philosophical beliefs,
- sex life or sexual orientation.
For instance, this prohibition forbids the use of voice or image recognition systems which may infer a person’s race, skin colour, or ethnicity.
On the other hand, the lawful use of biometric systems, such as those operated by law enforcement authorities, does not fall under the prohibition. Therefore, the use of collected biometric data for legally governed and controlled purposes, such as identifying an offender or undertaking other legal actions to suppress crime, remains allowed.
8. Real-time remote biometric identification systems for law enforcement purposes
However, not all law enforcement purposes are permitted under the AI Act. The final prohibition applies to the use of real-time remote biometric identification systems by the authorities. The reasoning behind this ban is to prevent breaches of human rights and freedoms and to ensure that the privacy of citizens is not violated.
A good example of non-allowed behaviour is using face recognition technology at mass events (such as large music concerts or political demonstrations) to identify individuals, and afterwards comparing the collected data against criminal databases or using it for other legally ungrounded purposes.
Exceptionally, there are some justified cases where such activities will be compliant with the AI Act:
- during the ongoing targeted search for victims of human trafficking or abduction,
- prevention of a specific and imminent threat to safety, such as a terrorist attack,
- localization of a person suspected of committing a serious crime for which a prison sentence of a minimum of four years may be imposed.
Yet, even where these systems may be used under one of the exceptions, there are still requirements prescribed by the AI Act that need to be fulfilled to mitigate the risks of real-time remote biometric identification. Namely, the AI Act states that these systems may be used only for confirming the identity of a person, under the following conditions:
- the severity, likelihood, and extent of harm that might occur if the system is not used must be taken into account,
- fundamental rights impact assessment must be conducted by the authorities, to analyze the consequences of the use of the system,
- the system must be registered in the EU database of high-risk AI systems,
- the use of such a system must be approved by the national authority of the state in which it is used, and the approval must be based on objective evidence or clear indications that the use of the system is necessary for, and proportionate to, achieving a goal justified under the AI Act,
- both the market surveillance authority and data protection authority of the respective EU state must be notified of the system. Once notified, those authorities must submit annual reports to the European Commission on the use of these systems.
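The conditions above are cumulative, which can be captured as a simple checklist. The field names below are illustrative shorthand, not the Act’s wording; the takeaway is that a single missing step makes a deployment non-compliant even when an exception applies.

```python
from dataclasses import dataclass

@dataclass
class DeploymentChecklist:
    """The cumulative conditions listed above, as boolean checks
    (field names are illustrative shorthand, not the Act's text)."""
    harm_assessment_done: bool           # severity/likelihood/extent of harm weighed
    fria_conducted: bool                 # fundamental rights impact assessment
    registered_in_eu_database: bool      # EU database of high-risk AI systems
    national_authorisation_granted: bool # approval by the national authority
    authorities_notified: bool           # market surveillance + data protection

def use_permitted(c: DeploymentChecklist) -> bool:
    """All conditions must hold: one missing step means the deployment
    is non-compliant even when an Article 5 exception applies."""
    return all(vars(c).values())
```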
If an EU Member State decides to generally allow the use of this type of system in public areas, they can enact a national regulation in that regard, provided that such a law is compliant with the described restrictions from the AI Act.
It can be noticed that all the prohibitions under the AI Act are broadly defined, in order to limit the possible threats arising from the use of AI. Hopefully, these restrictions will strike a balance between the beneficial use of artificial intelligence, on the one hand, and the prevention of its negative effects, on the other. However, in the months following the day this provision of the AI Act takes effect, we will see whether the stipulated restrictions are an efficient protection mechanism or a stumbling block for the AI industry. In any case, it is essential for every company to thoroughly analyze all of its AI systems before February 2025.