
High-risk deployer obligations

1. High-risk deployer obligations

Let’s wrap up with the obligations of deployers of high-risk AI systems. Deployers are businesses, public authorities, or other legal entities that use AI systems in their interactions with natural persons on the European market.

2. Obligations for deployers

Recall that we previously discussed the concept of intended purpose. Providers of high-risk AI systems must go to great lengths to make their systems safe to use for a given purpose. But it is deployers who use these systems, whether in hospitals, educational institutions, or police stations. What happens if they misuse them? For this reason, deployers have their own share of obligations, further ensuring that AI systems do not threaten health, safety, or fundamental rights.

3. So what are those obligations?

First, deployers must take appropriate measures to ensure that the system is used according to its instructions for use. This includes maintaining the automated logs under their control and monitoring the system for malfunctions. In case of a serious incident, they must inform both the provider of the high-risk AI system and the relevant authorities.

4. Notification required

Deployers who use an AI system in their interactions with workers, for example to assign tasks or to monitor performance, must inform workers that they will be subject to the use of an AI system. Furthermore, all deployers who use high-risk AI systems to make decisions about natural persons must inform those persons that they are subject to the use of a high-risk AI system. This might be the case, for example, when a high-risk AI system decides the premium of someone’s life insurance or whether someone is admitted to an educational institution.

5. Public authorities

Deployers who are public authorities and deployers who provide essential services, including credit and life or health insurance, need to conduct a fundamental rights impact assessment. The fundamental rights impact assessment, or FRIA, is similar to the data protection impact assessment, or DPIA, under the GDPR. Its goal is to ensure that when used in specific contexts, such as in marginalized communities, the AI system does not breach fundamental rights. For increased public scrutiny, deployers who are public authorities, such as police forces or migration authorities, and who use high-risk AI systems need to register their use of the AI system in the EU database.

6. Deployers can become providers!

Finally, it is critical to note that the distinction between providers and deployers is not always clear-cut. A deployer who makes a substantial modification to an AI system, changing its intended purpose, becomes a provider and assumes all provider obligations. The same is true when modifying a general-purpose AI model or a limited-risk AI system into a high-risk one. For example, if a company builds an application powered by GPT-4 with the intended purpose of making decisions on hiring and firing people, that company becomes a provider of a high-risk AI system, even though the engine powering the system is built by OpenAI.

7. Looking back

With this, let’s take one last look at the pyramid of risk and go through the main takeaways from this course. Unacceptable risk and prohibited practices: don’t do it. High-risk use cases: providers need to undergo a conformity assessment before placing the AI system on the European market, and deployers need to make sure the system is used in a safe manner. Limited-risk AI: chatbots, deepfakes, and AI-generated content need to be labeled as such when interacting with natural persons. No-risk: all providers and deployers of AI systems need to ensure an adequate level of AI literacy within their organizations. Separately, general-purpose AI models have rules based on their size and performance, with most obligations resting on the next generation of models that could pose systemic risks.

8. Thank you!

Thank you for joining, and, whether you are building AI, deploying AI, or simply learning about AI, I hope this course was useful to you in understanding how the European Union is shaping the evolution of AI in a safe, human-centric direction.
