

Government publishes Code of Practice for Cyber Security of AI

On 31 January 2025, the Department for Science, Innovation and Technology (DSIT) published a comprehensive Code of Practice for the Cyber Security of AI. The initiative aims to establish baseline security principles to protect AI systems from growing cyber threats. The voluntary nature of the code is in line with the broader UK government approach to AI: fostering innovation while maintaining security, without resorting to restrictive legislation. The code is designed to address risks unique to AI, such as data poisoning, model inversion and membership inference attacks. By covering the entire AI lifecycle, from secure design and development through deployment to end-of-life management, the code gives organisations a framework for securing their AI systems effectively.

The Implementation Guide for the AI Cyber Security Code of Practice, published alongside the code and developed following an extensive review of software and AI standards and frameworks issued by other governments and regulators, gives organisations clear, practical guidance on how to implement the principles and sets out risk mitigation controls for each cyber security measure.
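To make one of those threats concrete: in a membership inference attack, an adversary tries to work out whether a particular record was in a model's training data, typically by exploiting the model's tendency to be more confident on examples it has already seen. The Python sketch below is our own toy illustration, not anything drawn from the code or the guide; the dataset is synthetic, the model is deliberately overfitted, and the 0.9 confidence threshold is an assumption. It shows the sort of leakage that the testing contemplated by Principle 9 (below) is intended to surface.

```python
# Toy membership inference attack: guess "member" when the model's top-class
# confidence exceeds a threshold. Illustrative only; names and threshold are
# our assumptions, not drawn from the DSIT code or implementation guide.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_members, X_outsiders, y_members, y_outsiders = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Deliberately overfit so the confidence gap between training-set members
# and unseen records is easy to see.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_members, y_members)

def top_confidence(samples: np.ndarray) -> np.ndarray:
    """Return the model's highest class probability for each sample."""
    return model.predict_proba(samples).max(axis=1)

threshold = 0.9  # attack rule: flag a record as a training-set member above this
member_hit_rate = (top_confidence(X_members) > threshold).mean()
outsider_hit_rate = (top_confidence(X_outsiders) > threshold).mean()

print(f"members flagged:     {member_hit_rate:.2f}")
print(f"non-members flagged: {outsider_hit_rate:.2f}")
# A wide gap between the two rates means the model leaks information about
# who was in its training data.
```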

The code outlines the following 13 principles that span the AI lifecycle:

Principle 1: Raise awareness of AI security threats and risks - essentially, educate staff on AI security risks and update training as new threats emerge;

Principle 2: Design your AI system for security as well as functionality and performance - assess security risks before developing an AI system and document mitigation strategies;

Principle 3: Evaluate the threats and manage the risks to your AI system - regularly evaluate AI-specific attacks such as data poisoning and manage the risks they pose;

Principle 4: Enable human responsibility for AI systems - ensure AI decisions are explainable and users understand their responsibilities;

Principle 5: Identify, track and protect your assets - maintain an inventory of AI components and secure sensitive data;

Principle 6: Secure your infrastructure - secure the APIs, data pipelines and wider infrastructure on which AI systems depend, and manage the associated risks;

Principle 7: Secure your supply chain - developers, system operators and data custodians should ensure that security requirements and processes flow down to their suppliers;

Principle 8: Document your data, models and prompts - developers should document and maintain a clear audit trail of their model design and post-deployment maintenance plans (and make these details available to system operators and data custodians);

Principle 9: Conduct appropriate testing and evaluation - ensure that it is not possible to reverse engineer non-public aspects of the model or training data;

Principle 10: Communication and processes associated with End-users and Affected Entities - convey to end-users and affected entities where and how their data will be used, accessed, and stored;

Principle 11: Maintain regular security updates, patches and mitigations - provide security updates and patches and notify system operators of the updates;

Principle 12: Monitor your system’s behaviour - system operators should log system and user actions to support security activities and, together with developers, analyse and monitor those logs to check that AI system behaviour is as expected (a minimal logging sketch follows this list); and

Principle 13: Ensure proper data and model disposal - securely dispose of training data or models after transferring or sharing ownership.
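The code itself is technology-neutral and does not prescribe tooling, but as a rough illustration of the kind of record-keeping Principle 12 contemplates, the Python sketch below (our own, with a hypothetical log_inference helper and field names of our choosing) writes one structured audit record per model interaction, so that system operators and developers can later check that behaviour was as expected.

```python
# Minimal structured audit log for an AI system, sketched as an illustration of
# Principle 12. The helper name and fields are hypothetical, not prescribed by
# the code of practice.
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("ai_audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.FileHandler("ai_audit.log"))

def log_inference(user_id: str, model_id: str, prompt: str, response: str) -> None:
    """Append one JSON record per model interaction for later security review."""
    audit.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,      # who invoked the system
        "model_id": model_id,    # which model and version answered
        "prompt": prompt,        # the input, for post-hoc analysis
        "response": response,    # the output, to verify expected behaviour
    }))

# Example call a system operator might wrap around each model invocation:
log_inference("analyst-42", "support-bot-v1.3",
              "Summarise ticket 881", "The customer reports...")
```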

By providing a comprehensive framework and practical guidance, the code aims to secure AI systems throughout their lifecycle, benefiting both organisations and the broader digital economy. The government intends for the code to form the basis of a new global standard for secure AI, developed through the European Telecommunications Standards Institute (ETSI).
