
Risk classification

1. Risk classification

Let’s focus our attention on how the AI Act classifies AI systems based on the level of risk. We’ll also add a few examples, to make these concepts more tangible.

2. Providers versus deployers

First, there is an important distinction we need to make between “providers” (those who build AI) and “deployers” (those who use AI). In the AI Act, most obligations to make AI safer rest with providers, following a product safety regulation logic. However, since AI is not a traditional product, like a rubber duck or an elevator, risks can also arise from how companies or public authorities use AI to make decisions that impact people. In the AI Act, these companies or authorities using AI are called deployers, and they have their own obligations. For example, if Microsoft sells an AI solution to an insurance company, and the insurance company uses the AI to make decisions on granting credit to its clients, Microsoft is the provider and the insurance company is the deployer, each with its own distinct set of obligations.

3. The pyramid of risk

With this distinction in mind, we’re ready to dive deeper into the famous pyramid of risk. Obligations in the AI Act are tiered according to the level of risk AI systems can pose, but it is important to note that the Act does not regulate the AI systems themselves but, rather, the providers and deployers of those systems. To distinguish between obligations of providers and obligations of deployers, the AI Act relies on the concept of “intended use”: provider obligations kick in depending on what the AI system is built for, and deployer obligations kick in depending on how the AI system is actually used. Now onto the levels of risk: unacceptable risk, high risk, limited risk, and no risk.
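To make the tiering more concrete, here is a minimal Python sketch of the idea. It is purely illustrative and not the legal test in the Act; the example use cases and their tier assignments are simplified assumptions chosen for teaching.

```python
# Illustrative only: a toy mapping from intended uses to the AI Act's risk tiers.
# The example use cases below are simplified assumptions, not legal categories.
RISK_TIERS = {
    "unacceptable": {"social scoring", "mass surveillance", "predictive policing"},
    "high": {"medical device component", "recruitment screening", "credit scoring"},
    "limited": {"chatbot", "deepfake generation", "AI-generated content"},
}

def classify_intended_use(intended_use: str) -> str:
    """Return the risk tier for a given intended use, defaulting to minimal/no risk."""
    for tier, uses in RISK_TIERS.items():
        if intended_use in uses:
            return tier
    return "minimal"  # e.g. optimizing crop watering, identifying birds

print(classify_intended_use("credit scoring"))   # high
print(classify_intended_use("chatbot"))          # limited
print(classify_intended_use("bird identifier"))  # minimal
```

Note how the lookup key is the intended use, not the underlying technology: the same model could land in different tiers depending on what it is built for and how it is deployed.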

4. Unacceptable risk

Let’s start from the top of the pyramid, with unacceptable risk. Some uses of AI, such as mass surveillance, social scoring, and predictive policing, are considered to pose unacceptable risks because they are in opposition to European and democratic values. These practices are outright banned.

5. High risk

The AI Act focuses on uses of AI that can pose high risks to health, safety, or fundamental rights. These are either AI systems embedded in products (such as in medical devices or cars) or standalone AI systems used in high-impact areas such as employment, education, essential services, or law enforcement.

6. Limited and no risk

Some AI output can be deceptive to end users. Chatbots, deep fakes, and AI-generated content are the prime examples. These use cases are considered limited-risk and mainly carry transparency obligations: people must be told that they are interacting with an AI system or viewing AI-generated content. Finally, most AI use cases are benign and do not pose threats to health, safety, or fundamental rights. Other than the AI literacy obligations that apply across the board, the use of such systems is not regulated by the AI Act. For example, using AI to optimize the watering of crops, or to identify a bird in nature, is unlikely to threaten health, safety, or fundamental rights in any meaningful way.

7. GPAI

Separately, providers of powerful general-purpose AI models, which do not fit neatly on the pyramid of risk, also have risk-based obligations: basic transparency obligations for the vast majority, and additional risk-mitigation obligations for those models that can pose systemic risks.

8. Let's practice!

Before we move on, it is important to note that the rules are enforced through tiered, significant, and dissuasive fines. For engaging in prohibited uses of AI, these fines can reach 7% of annual global turnover or 35 million euros, whichever is higher.
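As a quick sanity check on that cap, here is a small Python sketch of the arithmetic; the function name is made up for illustration.

```python
# Top fine tier (prohibited AI practices): up to 7% of annual global turnover
# or EUR 35 million, whichever is higher. Function name is illustrative.
def max_fine_prohibited_use(annual_global_turnover_eur: int) -> int:
    percentage_fine = annual_global_turnover_eur * 7 // 100  # 7% of turnover, in euros
    return max(percentage_fine, 35_000_000)                  # EUR 35 million floor

print(max_fine_prohibited_use(200_000_000))    # 35000000  (the flat floor applies)
print(max_fine_prohibited_use(1_000_000_000))  # 70000000  (7% of turnover applies)
```

In other words, the flat amount acts as a floor for smaller companies, while for large companies the percentage of turnover dominates.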
