
The EU AI Act

1. The EU AI Act

Hi everybody. My name is Dan Nechita and, over the last five years, I have worked in the European Parliament drafting and negotiating the European Union Artificial Intelligence Act. I will be your guide to one of the most impactful regulations of the digital realm to date. We have a lot of ground to cover in a short amount of time, so let’s get started.

2. What is the EU AI Act?

So what is the EU AI Act? The AI Act is a risk-based horizontal Regulation laying down harmonized rules on Artificial Intelligence. Let’s unpack that. First, a Regulation is the most powerful type of European law, because it applies automatically to all 27 Member States as soon as it enters into force. The AI Act entered into force on August 2, 2024. Second, a horizontal approach means that the Act applies across all industries, including already regulated areas, like medical devices. Finally, a risk-based approach means that obligations and responsibilities are distributed according to risk: the higher the risk, the more obligations.

3. The ecosystem of trust

Why do we need the AI Act? The AI Act is meant to make AI that can pose threats to health, safety, or fundamental rights safer. Safer AI means fewer AI-caused accidents and therefore more trust in the technology. More trust means people and companies will be more confident in adopting and using AI. This, in turn, means AI will bring more benefits to our societies and economies.

4. Why build rules?

But the EU has many rules protecting health, safety, and fundamental rights. Why build rules specifically for AI? The answer lies in a number of AI properties that make current rules hard to enforce or ineffective: opacity (we cannot know exactly how a result was produced),

5. Why build rules?

complexity (we cannot understand how the AI functions),

6. Why build rules?

unpredictability (we cannot be sure of the AI’s outputs),

7. Why build rules?

and adaptability (we cannot be confident the AI results will remain consistent after it is deployed and continues to learn).

8. Who's affected?

So how does the AI Act impact you? The AI Act applies to all actors in the AI value chain, from those who build AI to those who import, distribute, or deploy AI solutions, whether private entities or public institutions, inside the European Union or outside it, if the output of the system is used in the EU.

9. Risk pyramid

The obligations for all these actors are distributed according to a pyramid of risk that will become all too familiar in this course: unacceptable risk (AI use cases that are outright prohibited),

10. Risk pyramid

high-risk (those AI systems most likely to pose threats to health, safety, and fundamental rights),

11. Risk pyramid

limited-risk (with limited obligations),

12. Risk pyramid

and no risk (the vast majority of AI, which is left unregulated).

13. AI literacy

Irrespective of risk, all providers and deployers of AI systems must ensure an adequate level of AI literacy among their staff operating AI, taking into account how the AI is used and whom it impacts. This is the only obligation that covers all AI, even outside the pyramid of risk. Some exemptions exist for specific applications, but exploring them is beyond the scope of this course. When in doubt, it is always better to assume the rules apply.

14. Let's practice!

The EU AI Act sets the ground rules for an AI-driven world. This course will explore the Act in detail, showing you how compliance is not only good practice but also good business in the European market.