
Limited exceptions

1. Limited exceptions

In this video, we’ll begin taking a closer look at the pyramid of risk. We’ll cover the obligations surrounding prohibited practices, limited-risk use cases, and no-risk AI.

2. Unacceptable risk

Let’s start with prohibited AI practices. The AI Act identifies a number of AI use cases that pose unacceptable risks to the health, safety, and fundamental rights of natural persons. These practices directly contradict European and democratic values, so they are banned outright. Let’s take a look at what they are:

3. Prohibited practices

- Using AI for subliminal manipulation: making people do something they would not otherwise do, in a way that is likely to cause them or others significant harm;
- Using AI to exploit the vulnerabilities of groups such as children, the elderly, people with disabilities, or people from disadvantaged socioeconomic backgrounds;
- Using AI for social scoring on a mass scale: assigning everyone a score based on their behavior and treating them differently based on that score;
- Using AI and profiling for predictive policing, that is, predicting the risk of someone committing a criminal offense in the future;

4. Prohibited practices

- Using AI to scrape the internet or CCTV footage indiscriminately to build databases of people’s faces;
- Using AI to infer the emotions of people in workplaces or educational institutions;
- Using AI to infer people’s protected attributes, such as race, political opinions, religious or philosophical beliefs, or sexual orientation, from their biometric data;
- And using AI to engage in mass real-time surveillance in public spaces, a practice also known as real-time remote biometric identification.

5. Limited exceptions

There are some limited exceptions to all of these prohibitions. But unless you’re trying to stop a serial killer and have judicial authorization, you’re better off not using AI like this. Engaging in these AI practices on the European market can lead to fines of up to 35 million euros or 7% of annual global turnover, whichever is higher.
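As a quick illustration of how that penalty cap works, here is a minimal sketch in Python. The turnover figure is hypothetical, and the higher-of-the-two reading follows the Act’s penalty provisions for prohibited practices.

```python
# Maximum fine for prohibited AI practices under the AI Act:
# the higher of EUR 35 million or 7% of annual global turnover.
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * annual_global_turnover_eur)

# Hypothetical company with EUR 2 billion in annual global turnover:
print(max_fine_eur(2_000_000_000))  # 140000000.0 -- 7% exceeds the flat cap
```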

6. Limited-risk

The AI Act focuses primarily on high-risk AI systems, which we will discuss in detail later. For now, let’s move to limited-risk systems.

7. Limited-risk

Some AI systems can, by their nature, be used to deceive or manipulate people. These systems, such as those used to generate images, are not risky in and of themselves; rather, their use carries risk only in certain contexts and under certain conditions. For the builders of chatbots and generative AI, the AI Act sets out basic obligations that increase transparency toward the end user.

8. Limited-risk

Providers building AI systems meant to interact directly with end users, such as chatbots, need to ensure those users are informed they are dealing with an AI, not a person. Jane from your favorite online store, in the little window at the bottom left of your screen, will have to inform you she’s an AI, not an actual sales representative.
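In practice, meeting this obligation can be as simple as a disclosure shown before any substantive reply. Below is a minimal sketch in Python; the handler and message names are hypothetical, not taken from the Act or any particular framework.

```python
AI_DISCLOSURE = (
    "Hi, I'm Jane, an AI assistant. I'm not a human sales representative."
)

def start_conversation(user_greeting: str) -> list[str]:
    # Hypothetical chat handler: the disclosure is always the first
    # message the end user sees, before any substantive reply.
    return [AI_DISCLOSURE, f"You said: {user_greeting}. How can I help?"]

print(start_conversation("Hello"))
```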

9. Limited-risk

Providers of AI systems that generate synthetic audio, image, video, or text content need to ensure that the AI-generated content is labeled as such. Your next legally produced deepfake, or AI music album on Spotify, will be labeled as AI-generated.
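What “labeled” looks like in practice is left to providers; machine-readable markings such as embedded metadata are one common approach. Here is a minimal sketch using the Pillow library to tag a generated PNG. The ai_generated key is purely illustrative, not a standard mandated by the Act; real deployments often use schemes such as C2PA content credentials.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_labeled(image: Image.Image, path: str) -> None:
    # Embed a machine-readable "AI-generated" marker in the PNG metadata.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    image.save(path, pnginfo=meta)

# Hypothetical usage with a freshly generated image:
save_labeled(Image.new("RGB", (256, 256)), "output.png")
```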

10. No risk and AI literacy

Finally, at the bottom of the pyramid is no-risk AI. Most AI falls into this category: think of every possible industrial application of AI, AI for climate, or AI for science. The AI Act does not apply to any use case that doesn’t pose threats to health, safety, or fundamental rights. The only obligation, across the board, is for those who build and those who use AI to have an adequate level of AI literacy, specifically so that risks don’t arise from misuse.

11. Let's practice!
