1. Classifying high-risk AI systems
Welcome back. We’re back in familiar territory with the “pyramid of risk”. We’ll dig deeper into the core of the AI Act, the classification of “high-risk AI” and the rules that apply to those who build these systems.
2. Background
The AI Act was built on a template for product safety regulation. In this type of regulation, those who build products that carry a certain level of risk, typically to health or safety, must go through a checklist of obligations to make their products safe. After fulfilling those obligations, they earn a marking of European conformity, also known as a CE marking, on their product. With this, they are allowed to put that product on the European market. Toys or personal protective equipment, for example, are governed by this class of regulations.
The AI Act works the same way: using AI in certain situations is considered high risk, and providers of those systems must fulfil a set of obligations before the AI is allowed on the European market.
3. High-risk AI systems
To maintain proportionality and not over-regulate, the Commission designated AI applications that are very likely to pose threats to health, safety, and fundamental rights as high risk and built a discrete list of use cases:
Biometric AI systems, AI in education and employment, AI in essential services, AI in law enforcement or migration, and AI in critical infrastructure fundamentally impact people’s lives. Let’s dive deeper into all of them.
4. Biometrics
Using biometric AI systems, such as remote biometric identification and emotion recognition, can impact individuals' privacy because of continuous surveillance and the potential misuse of sensitive personal data.
5. Education and employment
The use of AI in education and employment, especially in sensitive decisions such as access to education, evaluating learning outcomes, supervising students or workers, and making decisions on hiring, firing, or monitoring performance, has the potential to discriminate and impact individuals' future life opportunities.
6. Essential services
Using AI to make decisions about essential services can critically affect individuals' access to vital services such as social security benefits, healthcare, credit, and life or health insurance. If these systems fail or exhibit bias, there is a potential for social injustice.
7. Law enforcement
Using AI in law enforcement, migration, asylum, or border control can significantly impact individuals' rights and freedoms. Imagine an AI assessing the risks posed by people seeking to enter the European Union. Biased or inaccurate decisions can lead to discrimination against certain groups.
8. Critical infrastructure
Finally, AI used in critical infrastructure, in the administration of justice, or to influence votes in elections or referenda is also high risk because it can threaten health, safety, and democratic processes.
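As a purely illustrative sketch, and not the legal text of the Act or a substitute for a legal assessment, the use cases listed above can be thought of as a discrete checklist. The snippet below encodes that idea in Python; all names, such as HIGH_RISK_CATEGORIES and possibly_high_risk, are hypothetical and exist only to show that the Act works from an enumerated list of use cases rather than a general risk score.

```python
# Illustrative sketch only: a hypothetical checklist of the high-risk
# use-case categories described in this section. Not the AI Act's text.
HIGH_RISK_CATEGORIES = {
    "biometrics": [
        "remote biometric identification", "emotion recognition",
    ],
    "education_and_employment": [
        "access to education", "evaluating learning outcomes",
        "supervising students or workers", "hiring", "firing",
        "performance monitoring",
    ],
    "essential_services": [
        "social security benefits", "healthcare", "credit",
        "life or health insurance",
    ],
    "law_enforcement_and_migration": [
        "law enforcement", "migration", "asylum", "border control",
    ],
    "critical_infrastructure_and_democracy": [
        "critical infrastructure", "administration of justice",
        "elections or referenda",
    ],
}


def possibly_high_risk(use_case: str) -> list[str]:
    """Return the categories whose listed use cases mention the given term.

    A naive keyword match, shown only to illustrate the checklist idea.
    """
    term = use_case.lower()
    return [
        category
        for category, examples in HIGH_RISK_CATEGORIES.items()
        if any(term in example or example in term for example in examples)
    ]


# Example: flag a hiring tool for closer review.
print(possibly_high_risk("hiring"))  # ['education_and_employment']
```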
9. Embedded AI
Recall the properties of AI that made the AI Act necessary: opacity, complexity, unpredictability, adaptability. AI embedded in existing products has the same properties. Many of these products are already regulated by other laws, but those laws did not take into account the properties specific to AI.
The AI Act identifies several product categories in which embedded AI must follow the same rules as standalone AI, especially when the AI acts as a safety component. If this AI fails, it could pose a health or safety risk.
10. Embedded AI
Among the products automatically covered are medical devices, machinery, toys, radio equipment, recreational craft, and cableway installations.
For example, an on-device AI monitoring the condition of an implant would most likely be a safety component of that medical device and would be covered by the AI Act.
A number of more complex products are governed by European rules that must be amended before the AI Act can apply to them. AI safety components in these products will be covered by the AI Act once those rules are brought up to date. These include products in civil aviation, motor vehicles of all kinds, marine equipment, and rail systems.
11. Let's practice!
The AI Act foresees many provisions to resolve this regulatory overlap, but these are outside the scope of this course. Now, let's practice!