AI Act explained: Demystifying GPAI, AI models, and AI systems

The Council of the European Union has granted its final approval for the Artificial Intelligence (AI) Act, marking a significant milestone in the global regulation of AI technology. As the first comprehensive legislation of its kind, the AI Act establishes clear rules governing the development, deployment, and use of AI systems within the European Union. This groundbreaking legislation introduces critical terms such as General Purpose AI (GPAI) models and systems, which align with industry-standard nomenclature like "foundation models." This article aims to demystify these key definitions, explore their implications, and provide a clear understanding of the regulatory landscape shaped by the AI Act.

From foundation models to GPAI: the AI Act's new standards explained

With the Council of the European Union's final approval, the AI Act becomes the first regulation of its kind, setting forth comprehensive rules governing the development, deployment, and use of AI systems within the European Union, and introducing new terminology along with them.

Before the AI Act, the industry-standard term was "foundation model", which the AI Act uses interchangeably with "GPAI model". Foundation models are large-scale AI models trained on massive amounts of data, frequently employing self-supervision, that can be utilised for many applications. Examples include GPT-4, Stable Diffusion, DALL-E 2, Llama 2 and Mistral 7B.

The legal definition of a GPAI model can be found in Article 3 of the AI Act:

an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market[1]

The two terms are often used interchangeably, with "foundation model" generally regarded as a type of GPAI model. However, "GPAI model" appears to be the broader legal term, while "foundation model" remains more of a technical term from industry AI research.

The AI Act also introduces the term GPAI system, short for "general-purpose AI system": an AI system that can be used in, and adapted to, a wide range of applications, irrespective of how it is placed on the market.

The notion of "GPAI system" is defined as an AI system "based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems"[2]. In essence, a GPAI model is the underlying machine learning model, such as GPT-4 or DALL-E, while a GPAI system is the application or end product built on top of a GPAI model, making use of its capabilities for a specific purpose or set of purposes.

On top of this layered taxonomy, the AI Act also employs the concept of "AI system". GPAI models can function as standalone AI systems or as components of larger AI systems; at this level, a GPAI system is simply a special form of AI system, an overlap that makes the terminology easy to confuse.

More notably, the AI Act does not define "AI model", even though that term is used to define other terminology such as "GPAI model", adding to the complexity.

GPAI models with systemic risk

A major distinction stems from the regulation's risk-based approach to classifying GPAI models: those presenting a systemic risk must be subject to stricter controls because of their nature.

They must comply with the first layer of obligations, like every GPAI model, but a second layer of obligations applies to them as well: the AI Act requires additional information to be provided for GPAI models with systemic risk and reinforces their compliance and transparency obligations[3].

With such a difference in treatment between "classic" GPAI models and GPAI models with systemic risk, the categorisation and its criteria are of considerable importance. The AI Act defines the concept of "systemic risk" as:

a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain[4]

Moreover, to be classified as a GPAI model with systemic risk, one of the following conditions must be met[5]:

o    it has high-impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks[6]; or

o    based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel, it has capabilities or an impact equivalent to those set out in point (a), having regard to the criteria set out in Annex XIII.

The indicators listed in Annex XIII for the Commission's assessment include, for instance, the amount of computation used to train the model, its input and output modalities (such as text to text for large language models, or text to image), and its potential high impact on the internal market due to its reach.

Regarding this last criterion, a high impact due to reach shall be presumed when the model has been made available to at least 10,000 registered business users established in the EU.

It is interesting to mention that, as an example, GPT-4, DALL-E and Midjourney 5.1 could currently be considered GPAI models with systemic risk, since they most probably already reach this threshold. Indeed, ChatGPT reached 100 million monthly active users in 2023[7], DALL-E began with invitations sent to 1 million people[8], and Midjourney's user base is also counted in the millions[9].

With such a low threshold, the AI Act will have numerous consequences, the first of which will be to classify the most popular of these models as GPAI models with systemic risk.

The process of classifying a GPAI model as a model with systemic risk proceeds as follows:


[Infographic: GPAI model systemic risk assessment]
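The assessment steps above can be sketched in code. This is a minimal, illustrative sketch (not legal advice) of the classification logic of Article 51 and Annex XIII; the function name, parameters and the simplified treatment of the reach presumption are assumptions for illustration only.

```python
# Illustrative sketch (not legal advice) of the Article 51 / Annex XIII
# classification logic. All names and the simplified structure are
# hypothetical; the real assessment weighs many more criteria.
FLOP_THRESHOLD = 10**25              # training-compute presumption, Art. 51
EU_BUSINESS_USER_THRESHOLD = 10_000  # reach presumption, Annex XIII

def has_systemic_risk(training_flops: float,
                      eu_registered_business_users: int,
                      commission_designation: bool = False) -> bool:
    """Return True if a GPAI model would be classified as having
    systemic risk under this simplified sketch of the rules."""
    # Condition (a): high-impact capabilities, presumed when cumulative
    # training compute exceeds 10^25 floating point operations.
    if training_flops > FLOP_THRESHOLD:
        return True
    # Reach presumption from Annex XIII: at least 10,000 registered
    # business users established in the EU.
    if eu_registered_business_users >= EU_BUSINESS_USER_THRESHOLD:
        return True
    # Condition (b): an ex officio Commission decision, or one following
    # a qualified alert from the scientific panel.
    return commission_designation

# Example: a model trained with roughly 2 x 10^25 FLOPs
print(has_systemic_risk(2e25, 500))  # -> True
```

Note that the compute threshold alone triggers the presumption in this sketch, which mirrors how Article 51 treats the 10^25 FLOP mark as a rebuttable presumption of high-impact capabilities.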

Standard GPAI models

The immediate regulatory consequences for a general-purpose AI (GPAI) model classified as a "standard" GPAI model under the EU AI Act are the following:

Transparency obligations
  • Drawing up and maintaining the model's technical documentation;

  • Making information and documentation available to downstream AI system providers who intend to integrate the GPAI model, to enable their compliance;

  • Putting in place a policy to comply with EU copyright law;

  • Publishing a sufficiently detailed summary about the content used for training the GPAI model, using a template provided by the AI Office[10].

Lighter regime for open-source models

GPAI models released under a free and open-source license, whose parameters are publicly available, only need to comply with the copyright policy and training data summary obligations. They are exempt from the technical documentation requirements, unless they pose systemic risks[11].

It is interesting to note that a GPAI model on its own should not be considered a high-risk AI system under the AI Act, because models are regulated independently from systems. However, an AI system built on top of a standard GPAI model could still qualify as high-risk.
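The layering of obligations described above (base transparency duties, the open-source carve-out, and the extra layer for systemic risk) can be summarised in a small lookup sketch. This is an illustrative simplification, not a compliance tool; the obligation labels are hypothetical shorthand for the duties in Articles 53 and 55.

```python
# Illustrative sketch (not legal advice): which obligations apply to
# which kind of GPAI model. Labels are hypothetical shorthand.
BASE_OBLIGATIONS = {
    "technical_documentation",   # draw up and maintain (Art. 53)
    "downstream_information",    # info for downstream AI system providers
    "copyright_policy",          # policy to comply with EU copyright law
    "training_data_summary",     # public summary, AI Office template
}
# Free and open-source models with publicly available parameters are
# exempt from the documentation-related duties, unless systemic risk.
OPEN_SOURCE_EXEMPT = {"technical_documentation", "downstream_information"}

def applicable_obligations(open_source: bool, systemic_risk: bool) -> set:
    obligations = set(BASE_OBLIGATIONS)
    if open_source and not systemic_risk:
        obligations -= OPEN_SOURCE_EXEMPT
    if systemic_risk:
        # Second layer (Art. 55), collapsed here into a single label.
        obligations.add("systemic_risk_obligations")
    return obligations

print(sorted(applicable_obligations(open_source=True, systemic_risk=False)))
# -> ['copyright_policy', 'training_data_summary']
```

The key design point the sketch captures is that the open-source exemption falls away as soon as a model poses systemic risk: the full base layer plus the Article 55 layer then apply.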

Next steps

Regarding the timeline, the AI Act will be published in the Official Journal of the EU in the coming days and will enter into force 20 days after its publication. The obligations applicable to GPAI models described above will then become applicable[12]:

o    12 months after the entry into force for new GPAI models placed on the market after that date (approx. mid-2025);

o    36 months after the entry into force for GPAI models already placed on the market before the 12-month mark (approx. mid-2027).

Jurisconsul's AI and regulatory compliance services

Our firm advises on regulatory compliance with European law relating to AI and new technologies, guiding clients through the implementation or updating of their policies and legal documentation.

[1] Article 3, point 63, of the AI Act.

[2] Article 3, point 66, of the AI Act.

[3] Article 55 of the AI Act.

[4] Article 3, point 65, of the AI Act.

[5] Article 51 of the AI Act.

[6] Specifically, a GPAI model is presumed to have high impact capabilities when the cumulative amount of computation used for its training, measured in floating point operations, is greater than 10^25 (10 to the power of 25), Article 51 of the AI Act. For context, GPT-4 is estimated to have used around 1-10 x 10^24 FLOPs for training.

[10] Article 53 of the AI Act.

[11] Article 53, point 2, of the AI Act.

[12] Article 111 of the AI Act.
