
High-risk provider obligations

1. High-risk provider obligations

Welcome back. We're still in the "high risk" universe, but now we will focus on the obligations of providers who build high-risk AI systems.

2. Concept of intended purpose

To ensure obligations are well balanced between those who build and those who use AI systems, the AI Act relies on the concept of “intended purpose”.

3. Concept of intended purpose

For example, let’s assume an AI system built for identifying animals, a minimal risk scenario,

4. Concept of intended purpose

is somehow used for pricing life insurance, a high-risk scenario. This is absolutely not recommended and likely won't work well. The company that built the system in good faith for the purpose of identifying animals has no obligations under the AI Act, because it built and placed on the market a minimal-risk AI system. In this case, the obligations rest with the deployers, and we will cover this part later on.

5. Concept of intended purpose

In real life, providers often tailor their AI systems to their clients’ needs and intended purpose; that is, they are intentionally building and selling high-risk AI systems. That is when the AI Act obligations for providers kick in.

6. Conformity assessment

The set of obligations that providers have when putting a high-risk AI system on the European market is summed up under the term "conformity assessment". In traditional product safety, this conformity assessment has to be conducted by a certified third party. Under the AI Act, providers can conduct it themselves as long as they document the process and can demonstrate compliance with the obligations it contains. So, what are the obligations?

7. Risk management and governance

First, providers of high-risk AI systems need a risk management system to continuously identify, document, and mitigate risks, including through testing. Then, because AI systems' performance depends on the data they are trained on, providers need data governance measures that ensure the data is as unbiased as possible and fit for purpose, taking into account the likely impact of the system on the different categories of people it will be used on, such as the elderly, minorities, or disabled persons.

8. Documentation

Providers of high-risk AI systems need to create detailed technical documentation and enable record-keeping through automated logs, so that authorities can check compliance and ensure traceability in case anything goes wrong. As high-risk AI systems are likely to impact health, safety, and fundamental rights after their deployment, providers need to supply detailed information to deployers on the system's features, proper use, capabilities, limitations, and potential misuse. To enable deployers to use the system safely, providers must ensure effective human oversight through instructions on supervision and built-in features such as a "safe stop" button. Finally, providers are obligated to implement technical and organizational measures for system accuracy, robustness, and cybersecurity, ensuring systems perform consistently throughout their lifecycle.

9. Other obligations

Beyond the elements of the conformity assessment, providers of high-risk AI systems also need to create a quality management system, maintain the relevant compliance documentation, cooperate with authorities, and register the AI system in an EU database of high-risk AI systems before putting it on the market.

10. Ensuring compliance

Standardization is an important component of the compliance journey: following harmonized AI standards when building AI systems automatically leads to a presumption of conformity with the AI Act. However, standards are still in development. As is the case for other products, providers of high-risk AI systems will most likely opt for external assistance in fulfilling their obligations, either from certified compliance bodies or from the growing compliance market specifically designed for AI systems.

11. Let's practice!
