The first requirements of the EU AI Act come into force

The EU AI Act's (the Act) ban on AI systems that pose an unacceptable risk to people's safety, or that are intrusive or discriminatory, comes into force on 2 February 2025. In total, there are four levels of risk under the Act: unacceptable risk, in which case AI systems are prohibited; high risk, in which case AI systems are subject to extensive requirements; limited risk, which triggers only transparency requirements; and minimal risk, which does not trigger any obligations.

Companies still have until next year, under a phased implementation timetable, to bring their policies in line with most of the Act's provisions, but the ban on AI systems such as social scoring, profiling and facial recognition is already upon us. The Act also reaches beyond the borders of the EU: it applies to any organisation or individual that supplies an AI system or general-purpose AI (GPAI) model in the EU market, regardless of where they are established or located, and to any provider or deployer of an AI system established or located outside the EU where the output of the AI system is to be used within the EU.

For in-scope businesses (whether they are using, developing or marketing AI systems), the immediate focus on 2 February should therefore be on avoiding the “prohibited” AI practices, namely practices or systems that:

  • manipulate individuals' decisions subliminally or deceptively, with the objective or effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, causing or being reasonably likely to cause that person, another person or a group of persons significant harm
  • exploit vulnerabilities such as age, disability or socio-economic status with the objective or effect of materially distorting behaviour, causing or being reasonably likely to cause significant harm
  • evaluate or classify individuals based on their social behaviour or personality characteristics, causing detrimental or unfavourable treatment
  • assess or predict the risk of an individual committing a criminal offence based on profiling their personality traits and characteristics (this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity)
  • create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage
  • infer emotions in workplaces or education institutions
  • categorise individuals based on their biometric data to infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation 
  • collect “real time” biometric information in publicly accessible spaces for the purposes of law enforcement (except in very limited circumstances)

The fines are hefty: penalties for non-compliance can reach up to EUR 35m or 7% of the company's global annual turnover in the previous financial year, whichever is higher. The majority of the Act's provisions come into force on 2 August 2026, and European Commission guidance on banned AI systems is expected imminently.

Tags

artificial intelligence, commercial, technology