
ICO updates guidance on use of artificial intelligence

The Information Commissioner's Office (ICO) has published an updated version of its guidance on artificial intelligence (AI) and data protection, responding to industry demand for the ICO to clarify its stance on fairness in the use of AI.

The guidance “supports the UK government’s vision of a pro-innovation approach to AI regulation” and also delivers on one of the ICO’s key ICO25 commitments – to help organisations adopt new technologies while protecting people and vulnerable groups.

The ICO has also said that the guidance will need to be continually updated to keep pace with the “fast pace of technological developments”, and it has confirmed that it will support the implementation of the UK government’s upcoming White Paper on AI Regulation.

Some of the updates include: 

  • Restructuring the guidance around the data protection principles as this “makes editorial and operational sense” and will make updating the guidance more efficient.
  • Adding a new section covering what an organisation should assess when conducting a data protection impact assessment (DPIA) on its use of AI. For example, the assessment should now include evidence of consideration of “less risky alternatives” to achieve the same purpose and why those alternatives were not chosen.
  • Adding a new chapter on the transparency principle as it applies to AI. For example, the updated guidance confirms that, when personal data is collected directly from a data subject, the data subject should be notified if their personal data is going to be used to train an algorithm or AI model. 
  • Adding a new chapter on ensuring lawfulness in connection with AI, inferences and special category data. For example, the updated guidance highlights that AI might infer or guess details about a person which could constitute special category data. According to the updated guidance, the inference is likely to be special category data if an organisation can (or intends to) infer relevant information about an individual, or intends to treat someone differently on the basis of the inference.
  • Adding a new chapter on ensuring fairness when using AI. This chapter includes a non-exhaustive list of legal provisions relevant to fairness when using AI; an explanation of the differences between fairness, algorithmic fairness, bias and discrimination; and guidance on processing personal data for bias mitigation.
  • Adding a new annex on data protection fairness considerations across the AI lifecycle, from problem formulation to decommissioning. The annex also explains why fundamental aspects of building AI may have an impact on fairness, identifies the different sources of bias that can lead to unfairness, and lists possible mitigation measures.
This update supports the UK government’s vision of a pro-innovation approach to AI regulation and, more specifically, its intention to embed considerations of fairness into AI.


technology, data protection, employment, artificial intelligence