A
AI system
Also: artificial intelligence system
"A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
EU AI Act, Article 3(1). Official Journal of the European Union, 12 July 2024.
"A machine-based system that, for a given set of objectives, generates outputs such as predictions, content, recommendations, or decisions that influence physical or virtual environments. AI systems are designed to operate with varying levels of autonomy and may exhibit adaptiveness after deployment."
Adopted by OECD Council, 8 November 2023 (definition revision); full Recommendation updated 3 May 2024.
"An engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy."
Adapted from OECD Recommendation on AI:2019 and ISO/IEC 22989:2022. NIST AI 100-1, p. 3.
Practitioner note: The three definitions converge on a machine-based system generating outputs that influence physical or virtual environments. The EU AI Act definition tracks the revised OECD text almost word for word; the NIST formulation, adapted from the 2019 OECD Recommendation, omits inference and adaptiveness. Under the Act, "adaptiveness after deployment" and "explicit or implicit objectives" are explicit criteria that carry direct classification consequences.
AI literacy
"Skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations provided for in this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause."
EU AI Act, Article 3(56).
Practitioner note: Article 4 creates an obligation - not merely a goal - for providers and deployers to take measures to ensure sufficient AI literacy among staff who operate or oversee AI systems. This definition is the only one currently carrying legal weight in EU law.
B
Biometric categorisation system
"An AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data unless it is ancillary to another commercial service and strictly necessary for objective technical reasons."
Biometric categorisation systems falling within the use cases listed in Annex III are regulated as high-risk. Systems categorising persons by sensitive attributes - race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation - are subject to specific prohibitions under Article 5.
Practitioner note: The carve-out for systems "ancillary to another commercial service and strictly necessary for objective technical reasons" is narrow. Practitioners deploying systems that incidentally categorise users by protected characteristics need to assess whether the carve-out applies before assuming the system falls outside scope.
D
Deployer
Also: user (in earlier drafts). "Operator" is a distinct, broader term under Article 3(8), covering providers, deployers, importers and distributors alike.
"Any natural or legal person, public authority, agency or other body using an AI system under its own authority except where the AI system is used in the course of a personal non-professional activity."
Deployers of high-risk AI systems carry specific obligations under Chapter III, Section 3, including conducting fundamental rights impact assessments in certain cases (Article 27).
Practitioner note: Most in-house legal, privacy and compliance teams procuring AI tools from third-party vendors are deployers, not providers, under the EU AI Act. The distinction determines which obligations apply and which can be contractually allocated to the provider.
G
General-purpose AI model
Also: GPAI model; foundation model (informal)
"An AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market."
Chapter V of the EU AI Act (Articles 51-56) sets out obligations specific to GPAI model providers, including transparency, copyright compliance, and - for models with systemic risk - adversarial testing and incident reporting.
Practitioner note: The research and prototyping carve-out applies only before a model is placed on the market. Once the model is placed on the market - even as a component in a downstream product - the carve-out no longer applies and GPAI obligations attach to the upstream model provider.
General-purpose AI system
"An AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems."
Distinct from a GPAI model. The system is the deployed product; the model is the underlying component. Obligations may attach at both levels depending on the actor's role.
H
High-risk AI system
"AI systems referred to in Annex III shall be considered to be high-risk. AI systems referred to in Annex III shall not be considered to be high-risk where they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making."
Annex III lists eight areas: biometrics, critical infrastructure, education and vocational training, employment, access to essential private and public services, law enforcement, migration, asylum and border control, and the administration of justice and democratic processes. The self-assessment exception in Article 6(3) requires documented justification.
Practitioner note: Classification as high-risk triggers the full Chapter III obligation set: risk management system, data governance, technical documentation, logging, transparency, human oversight, accuracy and robustness requirements. The Article 6(3) self-assessment carve-out is narrow and requires a documented rationale retained in the technical file.
Human oversight
"High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use."
Article 14 further requires that natural persons assigned oversight responsibilities have the competence, authority and means to intervene, override, or halt the system. Oversight measures must be built into the system by the provider and implemented by the deployer.
"The AI RMF refers to human oversight as a component of accountable and transparent AI systems. Organizational roles and responsibilities for AI risk management are documented and human oversight of AI systems is established, including policies and procedures for when humans should be involved in AI decisions and when AI should defer to humans."
NIST AI RMF 1.0, GOVERN 3.2. Human oversight in the NIST framework is framed as an organisational governance and policy obligation, not solely a technical design requirement.
P
Provider
"Any natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places that system or model on the market or puts the system into service under its own name or trademark, whether for payment or free of charge."
Providers carry the primary obligation set under the EU AI Act, including conformity assessment, technical documentation, and registration in the EU database for high-risk systems. Article 25 sets out when a deployer becomes a provider.
Practitioner note: Article 25 is the provision most in-house teams overlook: a deployer becomes a provider - and assumes provider obligations - when they place an AI system on the market under their own name, make a substantial modification to a high-risk system, or change the intended purpose of a system in a way that brings it into high-risk scope.
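The three Article 25 triggers amount to a simple disjunction, which makes them easy to encode as a procurement checklist. A minimal sketch (function and flag names are illustrative, not the Act's wording):

```python
def deployer_becomes_provider(markets_under_own_name: bool,
                              substantially_modifies_high_risk_system: bool,
                              repurposes_system_into_high_risk: bool) -> bool:
    # Article 25(1): any single trigger flips the deployer into the
    # provider role, with the full provider obligation set attaching.
    return (markets_under_own_name
            or substantially_modifies_high_risk_system
            or repurposes_system_into_high_risk)
```

Any one flag is enough; there is no weighing exercise, which is why rebranding a vendor's system under your own trademark is sufficient on its own to attract provider obligations.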
R
Risk
"The combination of the probability of an occurrence of harm and the severity of that harm."
This two-factor definition - probability × severity - is the operative definition for the EU AI Act's risk management requirements under Article 9.
"The AI RMF characterises AI risk as the combination of the probability that an AI system will cause harm and the severity of that harm, taking into account the breadth of harm and whether the harm is reversible."
NIST AI RMF 1.0, p. 8. The NIST formulation adds breadth and reversibility as considerations, making it a four-factor rather than two-factor assessment.
Practitioner note: The EU Act's definition is simpler but the practical difference is material: the NIST formulation requires organisations to assess how many people are affected (breadth) and whether the harm can be undone (reversibility). Practitioners using the NIST AI RMF alongside the EU AI Act will need to map between the two frameworks explicitly.
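A minimal sketch of that mapping, assuming illustrative 0-1 scales and a made-up weighting (neither framework prescribes numeric scales or any formula beyond probability combined with severity; every name here is hypothetical):

```python
from dataclasses import dataclass

@dataclass
class HarmScenario:
    probability: float   # likelihood of the harm occurring (0-1)
    severity: float      # gravity of the harm if it occurs (0-1)
    breadth: float       # NIST-only consideration: share of people affected (0-1)
    reversible: bool     # NIST-only consideration: can the harm be undone?

def eu_risk(s: HarmScenario) -> float:
    # Article 3(2): probability of occurrence combined with severity - nothing more.
    return s.probability * s.severity

def nist_risk(s: HarmScenario) -> float:
    # The AI RMF adds breadth and reversibility as considerations; the
    # weighting below is an illustrative assumption, not NIST's method.
    base = s.probability * s.severity
    widened = base * (1 + s.breadth)
    return widened if s.reversible else 2 * widened
```

The arithmetic is not the point; the field list is. A risk register built only for Article 9 has no column for breadth or reversibility, so a NIST-aligned assessment cannot be reconstructed from it after the fact.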
S
Serious incident
"An incident or malfunctioning of an AI system that directly or indirectly leads to any of the following: (a) the death of a person, or serious harm to a person's health; (b) a serious and irreversible disruption of the management or operation of critical infrastructure; (c) infringement of obligations under Union law intended to protect fundamental rights; (d) serious damage to property or the environment."
Providers of high-risk AI systems must report serious incidents to the national market surveillance authority of the Member State where the incident occurred, without undue delay and in any event within 15 days (Article 73). Deployers must notify providers immediately upon becoming aware of a serious incident.
Practitioner note: Article 73 sets a tiered timeline, not a flat deadline. Death of a person: 10 days from provider awareness (Article 73(4)). Widespread infringement or serious and irreversible disruption of critical infrastructure: 2 days (Article 73(3)). All other serious incidents: 15 days (Article 73(2)). The clock runs from when the provider becomes aware, not when the incident occurred - making the deployer-to-provider notification chain a contractual dependency that must be in place before deployment. Article 26(5) requires deployers to inform the provider "immediately" upon identifying a serious incident; if that chain is undefined, the provider's reporting window erodes before they know there is one. Note also that Article 73 applies to high-risk AI systems only. GPAI models with systemic risk report serious incidents separately to the AI Office under Article 55.
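The tiered clock reduces to a lookup plus date arithmetic. A minimal sketch (the category keys, dictionary and function names are illustrative labels, not statutory terms):

```python
from datetime import date, timedelta

# Deadlines in days from provider awareness (Article 73(2)-(4));
# the category labels are mine, not the Act's.
REPORTING_WINDOWS = {
    "death": 10,                   # Article 73(4)
    "critical_infrastructure": 2,  # Article 73(3); also widespread infringements
    "other_serious_incident": 15,  # Article 73(2)
}

def reporting_deadline(category: str, provider_aware_on: date) -> date:
    """Latest date for the provider's report to the market surveillance
    authority - the clock starts at provider awareness, not at the incident."""
    return provider_aware_on + timedelta(days=REPORTING_WINDOWS[category])

# A deployer notices an incident on 1 March but only informs the provider
# on 8 March: the 10-day window now ends 18 March, which is why the
# notification chain is a contractual dependency.
print(reporting_deadline("death", date(2025, 3, 8)))  # 2025-03-18
```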
Systemic risk
"A risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole, that can propagate at scale across the value chain."
Article 51 provides that a GPAI model is presumed to have systemic risk where it is trained using a total computing power of more than 10^25 FLOPs. The AI Office may designate additional models as having systemic risk on the basis of other criteria.
Practitioner note: The 10^25 FLOP threshold is a rebuttable presumption, not a hard boundary. Deployers integrating GPAI models should verify whether the underlying model has been designated as systemic risk: this affects the obligations that apply to the upstream provider and, indirectly, the due diligence required of the deployer.
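Because the presumption reduces to a single comparison, it can be expressed directly. A sketch (the rebuttal and designation flags are deliberate simplifications of the Article 51-52 procedure):

```python
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # Article 51(2): cumulative training compute

def presumed_systemic_risk(training_flops: float,
                           designated_by_ai_office: bool = False,
                           presumption_rebutted: bool = False) -> bool:
    # The compute threshold creates a rebuttable presumption; the AI Office
    # may also designate models below it on the basis of other criteria.
    if designated_by_ai_office:
        return True
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD and not presumption_rebutted
```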
T
Training data
"Data used for training an AI system through fitting its learnable parameters."
Article 10 requires providers of high-risk AI systems to implement data governance and management practices covering training, validation and testing datasets. Training data must meet quality criteria including being relevant, representative, and free of errors to the extent possible.
"Data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process in order, inter alia, to prevent underfitting or overfitting."
The second definition above is the Act's parallel definition of validation data. The EU AI Act separately defines training data (Article 3(29)), validation data (Article 3(30)), and testing data (Article 3(32)) - a distinction with direct consequences for data governance documentation under Article 10.
Practitioner note: The Act's tripartite distinction between training, validation and testing data is not merely technical - Article 10(2) requires providers to document governance practices for each category separately. Practitioners reviewing AI governance documentation should check that all three are addressed.
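One way to operationalise that check is a documentation stub keyed to the three defined categories. A sketch (the class and field names are hypothetical, an illustrative subset rather than Article 10's own list):

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    """Governance entry for one dataset category; the fields are an
    illustrative subset of the practices listed in Article 10(2)."""
    name: str
    provenance: str          # origin and collection process
    relevant: bool = False
    representative: bool = False
    errors_checked: bool = False

@dataclass
class DataGovernanceFile:
    # The Act defines the three categories separately (Article 3(29),
    # 3(30), 3(32)), so a documentation review should confirm that all
    # three are populated, not just the training entry.
    training: DatasetRecord | None = None
    validation: DatasetRecord | None = None
    testing: DatasetRecord | None = None

    def gaps(self) -> list[str]:
        # Categories with no record at all - the first thing to check.
        return [cat for cat in ("training", "validation", "testing")
                if getattr(self, cat) is None]
```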
Transparency
"High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately."
Article 13 further requires that high-risk AI systems be accompanied by instructions for use including the intended purpose, level of accuracy, and known limitations. Transparency obligations for GPAI model providers are set out separately in Article 53.
"Accountable and transparent AI systems provide information about how the system was designed, developed, and deployed to relevant parties, and are clear about the inherent uncertainty in AI outputs. Transparency enables accountability and supports people in making informed decisions about whether and how to use an AI system."
NIST AI RMF 1.0 (2023). Transparency is one of seven characteristics of trustworthy AI in the NIST framework, alongside validity and reliability, safety, security and resilience, explainability and interpretability, privacy, and fairness.
Practitioner note: The EU Act frames transparency as a design obligation on providers and an operational obligation on deployers. The NIST framework frames it as an organisational accountability obligation that spans the entire AI lifecycle. Practitioners need both framings: the EU Act tells you what must be disclosed; the NIST framework tells you how to govern disclosure internally.