The AI landscape is evolving rapidly, presenting ongoing challenges for effective regulation. As governments around the world grapple with these challenges, it is increasingly important for organisations to prepare for regulatory compliance.
In this article, we provide a brief overview of the status of AI regulation in the United Kingdom, the European Union and the United States.
United Kingdom
In March 2023, the UK government published a white paper, A pro-innovation approach to AI regulation, which set out a principles-led regulatory framework for AI in the UK (the Framework).
Rather than appointing an AI-specific regulator, the Framework proposed that existing regulators should extend their responsibilities to incorporate AI. Regulators such as the Competition and Markets Authority, the Financial Conduct Authority, the Office of Communications and the Office of Gas and Electricity Markets have since published their approaches to regulating AI.
Although there is no AI-specific regulator in the UK, a number of organisations are responsible for overseeing the UK’s regulation of AI, including the Department for Science, Innovation and Technology, the AI Security Institute and the Digital Regulation Cooperation Forum.
Whilst the UK government has to date favoured an innovation-led approach to regulating AI, a Private Member’s Bill that aims to establish a central AI authority, the Artificial Intelligence (Regulation) Bill (the Bill), was re-introduced into the House of Lords on 4 March 2025. The Bill was previously introduced into the House of Lords in November 2023, but fell following the announcement of the general election on 22 May 2024 and the prorogation of Parliament. If enacted, the Bill would align the UK’s regulation of AI more closely with the approach adopted in the EU, for example by creating a central regulatory authority, as explored below.
European Union
In the EU, Regulation (EU) 2024/1689 (the AI Act), adopted by the Council of the European Union on 21 May 2024, sets out a prescriptive legislative framework for the regulation of AI, focusing on systemic risks and fundamental rights.
The majority of the AI Act will apply from 2 August 2026. In the meantime, the European Commission is promoting the AI Pact, which encourages organisations to plan for the implementation of measures in line with the AI Act.
The AI Act regulates AI across all sectors and is more centralised than the Framework in the UK. It relies on a network of new and established regulators, whilst also establishing a central European Artificial Intelligence Board and requiring each Member State to designate national competent authorities for AI.
Alongside the AI Act, a number of sector-specific regulations and directives apply in the EU. For example, in the machinery sector, Regulation (EU) 2023/1230 on Machinery and Repealing Directive 2006/42/EC and Directive 73/361/EEC imposes requirements designed to address safety risks arising from the use of driverless vehicles and robots that collaborate with humans, amongst other technologies. In the area of product liability, Directive (EU) 2024/2853 on Liability for Defective Products and Repealing Directive 85/374/EEC modernises the product liability regime so that AI systems and digital services fall within its scope. Organisations must therefore ensure compliance with both the AI Act and any provisions that are specific to their sector.
United States
On 23 January 2025, President Trump signed Executive Order 14179, Removing Barriers to American Leadership in Artificial Intelligence (the Order). The Order revoked certain existing AI policies and directives that were considered barriers to American innovation in AI and aims to solidify the US’ position as a global leader in AI.
More recently, on 22 May 2025, the House of Representatives passed the One Big Beautiful Bill Act (the Bill Act). The Bill Act runs to over 1,000 pages and includes provisions relating to AI, such as the allocation of significant investment in AI.
As currently drafted, the Bill Act proposes a ten-year moratorium that would restrict states from enacting new AI regulations and from enforcing existing ones. As there is no comprehensive federal AI legislation in the US, a number of states have proposed or enacted AI regulations that would be affected by the Bill Act.
Examples can be seen in California, Colorado and Utah. Utah, for instance, has recently enacted laws regulating the use of generative AI in business, including restrictions on the use of chatbots to engage in therapy-like conversations. The enforceability of such rules over the next decade is now in question.
A global approach to AI
A number of countries have also explored joint efforts to manage the development of AI and opportunities for international collaboration.
At the Hiroshima Summit in 2023, the G7 developed the Guiding Principles for Organizations Developing Advanced AI Systems and the accompanying Code of Conduct for Organizations Developing Advanced AI Systems, both of which aim to promote the safe and secure use of AI worldwide. The UK also hosted a summit on AI safety at Bletchley Park in 2023, which culminated in the Bletchley Declaration. In the Declaration, the attendees recognised the opportunities and risks that AI presents and committed to collaborating on research and risk-based policies.
Despite a shared focus on principles such as accountability and transparency, the differing approaches of the UK, the EU and the US make a harmonised approach to the regulation of AI unlikely. Businesses will therefore need to adapt to an assortment of overlapping and sometimes conflicting rules, depending on the nature and location of their activities. If you need tailored advice on risk management and regulatory compliance, we can help.