
General purpose AI models

1. General purpose AI models

We’ll now take a detour from the pyramid of risk to cover a special category of obligations related to general purpose AI models.

2. Systems versus models

The AI Act makes an important distinction between “AI systems” and “AI models”. AI systems are the finished AI “products”, ready to be deployed on the market. They take specific inputs and produce expected types of output. AI “models”, on the other hand, are the brains behind the systems. The best example is also the most famous: ChatGPT is an AI system, a chatbot with a user interface and concrete outputs based on user inputs. On the other hand, GPT-3.5, GPT-4, and GPT-4o are the successive models powering ChatGPT.

3. General purpose AI models

The AI Act concerns most AI systems, but it also has a distinct set of rules for powerful general purpose AI models. Such models can potentially power millions and millions of AI systems. Anything that goes wrong in these models could pose what the AI Act calls “systemic risks”.

4. Size and scope

In the AI world, two very common concepts used to classify AI models are the number of model parameters, which approximates size, and the number of floating-point operations (FLOPs) used in training, which approximates performance. The AI Act uses both to determine the levels of obligations for those who build GPAI models.

5. Obligations regarding all GPAI models

In the AI Act, a model is generally assumed to be a general purpose AI model if it has above one billion parameters. For simplification, we can call these “low-risk GPAI models”. Providers of such models have several light-touch obligations to ensure the models are safe to use downstream and built in accordance with European law.
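To make the parameter threshold concrete, here is a minimal Python sketch of the one-billion-parameter presumption described above. The function name and the example model sizes are illustrative assumptions, not part of the AI Act itself; only the one-billion figure comes from the text.

```python
# Illustrative sketch only: the one-billion-parameter presumption from the
# text above, expressed as a simple check. Names and example figures are
# hypothetical, not taken from the AI Act.

GPAI_PARAM_PRESUMPTION = 1_000_000_000  # one billion parameters

def is_presumed_gpai(num_parameters: int) -> bool:
    """Return True if a model is presumed to be a GPAI model by size."""
    return num_parameters > GPAI_PARAM_PRESUMPTION

print(is_presumed_gpai(7_000_000_000))  # e.g. a 7B-parameter model -> True
print(is_presumed_gpai(350_000_000))    # e.g. a 350M-parameter model -> False
```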

6. Obligations regarding all GPAI models

These obligations are to prepare and maintain technical documentation on the training and evaluation of the model, provide downstream integrators with enough documentation to safely integrate the model into their AI systems, respect the reservation of rights foreseen in the Copyright Directive, and publish a summary of the content used for model training.

7. Systemic risk

Some of these general purpose models can become so powerful that they can impact the whole European market. The potential impact of malfunctions with a widespread effect is what the EU calls “systemic risk”. In the AI Act, general purpose AI models trained on more than 10^25 floating-point operations are automatically classified as GPAI models with systemic risk. It is estimated that the current most powerful models, like GPT-4, would need about 10 times more operations to cross this threshold. This classification is intended to capture the next generation of models, which is expected to be even more powerful.
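For a sense of scale, a widely used back-of-the-envelope estimate puts transformer training compute at roughly 6 × parameters × training tokens. The sketch below combines that approximation (which is an assumption for illustration, not part of the AI Act) with the 10^25 FLOP threshold from the text; the model size and token count are hypothetical.

```python
# Illustrative sketch: the 10^25 FLOP systemic-risk threshold from the text,
# combined with the common "compute ≈ 6 * N * D" training-FLOP approximation
# (N = parameters, D = training tokens). The approximation and the example
# figures are assumptions, not taken from the AI Act.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def estimate_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer."""
    return 6 * num_parameters * num_tokens

def has_systemic_risk(training_flops: float) -> bool:
    """True if training compute exceeds the 10^25 FLOP threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical model: 1 trillion parameters trained on 10 trillion tokens.
flops = estimate_training_flops(1e12, 10e12)
print(f"{flops:.1e} FLOPs -> systemic risk: {has_systemic_risk(flops)}")
# 6.0e+25 FLOPs -> systemic risk: True
```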

8. GPAI obligations

On top of the obligations for all GPAI model providers that we outlined before, companies building GPAI models with systemic risk have obligations on par with the potential impact. First, they need to immediately notify the European Commission when passing the training threshold. They also need to perform model evaluations to identify and mitigate systemic risks, report any serious incidents to the Commission, and ensure an adequate level of cybersecurity for the model.

9. Let's practice!

There will not be many models categorized as having systemic risk, but those who do build such models will be under heavy scrutiny from the European Commission to ensure that AI deployed in Europe remains human-centric and safe to use.