On 6 February 2025, the EU Commission published guidelines on the definition of an “Artificial Intelligence System” under the AI Act (Guidelines on the definition of an artificial intelligence system established by the AI Act). The purpose of the guidelines is to “assist providers and other relevant persons in determining whether a software system constitutes an AI system to facilitate the effective application of the rules”.
Where a system falls outside the definition of an “AI system”, it will not directly be subject to the AI Act. The guidelines on the AI system definition are not binding, are designed to evolve over time and will be updated as necessary in light of practical experience.
Article 3(1) of the AI Act defines an AI system as follows: “AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” This definition can be broken down into seven main elements which are explored further in the guidance and below:
- A machine-based system - in simple terms, AI systems must be computationally driven and based on machine operations; the term “machine” can be understood to include both the hardware and software components that enable the AI system to function.
- Autonomy - a degree of autonomy is a necessary condition for qualification; systems that are “designed to operate with some reasonable degree of independence of actions fulfil the condition of autonomy in the definition of an AI system.” A system involving full manual human involvement and intervention in its operations falls outside the definition. Human involvement and human intervention can be either direct, e.g. through manual controls, or indirect, e.g. through automated systems-based controls which allow humans to delegate or supervise system operations.
- Adaptiveness - use of the term “may” in relation to this element of the definition indicates that a system may, but does not necessarily have to, possess adaptiveness or self-learning capabilities after deployment to constitute an AI system. Adaptiveness is therefore instructive, but not a decisive condition for determining whether the system falls in scope of the definition.
- AI system objectives - the objectives of the system may be explicitly or implicitly defined, with explicit objectives referring to clearly stated goals that are directly encoded by the developer into the system and implicit objectives referring to goals that are not explicitly stated but may be deduced from the behaviour or underlying assumptions of the system.
- Inferencing how to generate outputs using AI techniques - AI systems should be distinguished from simpler traditional software systems or programming approaches, and the definition should not cover systems that are based on rules defined solely by natural persons to automatically execute operations. The inference process enables AI systems to produce predictions, content, recommendations or decisions that influence physical and virtual environments (a brief illustrative sketch follows this list).
- Outputs that can influence physical or virtual environments - the guidance states that AI systems, including systems based on machine learning approaches and logic- or knowledge-based systems, differ from non-AI systems in their ability to generate outputs such as predictions, content, recommendations and decisions, because they can handle complex relationships and patterns in data. AI systems can generally generate more nuanced outputs than other systems, for example by leveraging patterns learned during training or by using expert-defined rules to make decisions, offering more sophisticated reasoning in structured environments.
- Interaction with the environment - according to the guidance, AI systems are not passive, but actively impact the environments in which they are deployed. The reference to “physical or virtual environments” indicates that the influence of an AI system may extend both to tangible, physical objects (e.g. a robot arm) and to virtual environments, including digital spaces, data flows and software ecosystems.
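To make the inference element more concrete, the following minimal Python sketch is our own illustration rather than anything drawn from the guidelines; the screening scenario, figures and function names are entirely hypothetical. It contrasts a system whose outputs are produced by a rule defined solely by a natural person with one that infers its decision rule from training data.

```python
# Purely illustrative: contrasts a fixed, human-authored rule (outside the
# AI system definition) with a model that infers how to generate outputs
# from data (the "inference" element of Article 3(1)).
# All names and figures below are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def rule_based_screening(income: float, existing_debt: float) -> bool:
    """Rule defined solely by a natural person: no learning, reasoning or modelling."""
    return income - existing_debt > 20_000


# By contrast, this model infers how to map inputs to outputs from examples,
# producing predictions that can influence a decision in a virtual environment.
X_train = np.array([[30_000, 5_000], [15_000, 12_000],
                    [60_000, 10_000], [20_000, 18_000]])
y_train = np.array([1, 0, 1, 0])  # hypothetical past outcomes

model = DecisionTreeClassifier().fit(X_train, y_train)

print(rule_based_screening(40_000, 8_000))         # output of the fixed rule
print(model.predict(np.array([[40_000, 8_000]])))  # output inferred by the model
```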
The guidelines specify systems that fall outside the scope of the AI system definition, namely those based on rules defined solely by natural persons to automatically execute operations. The guidelines also include examples of the characteristics of such systems, which clarify some of the terminology in practical application. Excluded systems include:
- Systems used to “improve mathematical optimisation or to accelerate and approximate traditional, well established optimisation methods, such as linear or logistic regression methods.”
- Basic data processing systems that follow “predefined, explicit instructions or operations… developed and deployed to execute tasks based on manual inputs or rules, without any ‘learning, reasoning or modelling’ at any stage of the system lifecycle. They operate based on fixed human-programmed rules, without using AI techniques, such as machine learning or logic-based inference, to generate outputs.”
- Systems based on classical heuristics, i.e. problem-solving techniques that rely on experience-based methods to find approximate solutions efficiently; these are excluded because they apply predefined rules or algorithms to derive solutions.
- Simple prediction systems whose performance can be achieved via a basic statistical learning rule; while such systems may technically be classified as relying on machine learning approaches, they fall outside the scope of the AI system definition on account of that basic performance (see the illustrative sketch after this list).
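By way of illustration only, and not drawn from the guidelines themselves, a “simple prediction system” of the kind described in the final bullet might amount to no more than the following basic statistical rule; the forecasting scenario and figures are hypothetical.

```python
# Purely illustrative sketch of a "basic statistical learning rule":
# forecasting the next value as the mean of past observations. Although
# technically fitted to data, such a simple estimator is the kind of system
# the guidelines place outside the AI system definition.
# All figures are hypothetical.
from statistics import mean


def fit_baseline(history: list[float]) -> float:
    """'Training' amounts to computing a single summary statistic."""
    return mean(history)


past_daily_sales = [120.0, 135.0, 128.0, 140.0, 132.0]  # hypothetical data
forecast = fit_baseline(past_daily_sales)
print(f"Predicted next-day sales: {forecast:.1f}")
```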
The definition of an AI system adopts a lifecycle-based perspective encompassing the pre-deployment or “building” phase of the system and the post-deployment or “use” phase of the system. It should be noted that the seven elements described above are not required to be present continuously throughout both phases of that lifecycle, and the guidelines make clear that no automatic determination of whether something is an AI system should be made on this basis. Instead, each system should be assessed individually in light of its specific architecture and functionality, taking into account the seven elements. The guidelines will certainly be welcomed by providers, deployers, importers and distributors of AI systems in determining whether a system constitutes an AI system within the meaning of the AI Act, and should support the Act’s effective application and enforcement.