The UK’s framework for AI regulation
The Government’s strategy for implementing these principles is predicated on three core pillars:
- Leveraging existing regulatory authorities and frameworks
- Establishing a central function to facilitate effective risk monitoring and regulatory coordination
- Supporting innovation by piloting a multi-agency advisory service – the AI and Digital Hub
We will examine each of these in detail below.
1. Leveraging existing regulatory authorities and frameworks
As expected, the UK has no plans to introduce a new AI regulator to oversee the implementation of the framework. Instead, existing regulators, such as the Information Commissioner’s Office (ICO), Ofcom, and the FCA, have been asked to implement the five principles as they regulate and supervise AI within their respective domains. Regulators are expected to take a proportionate, context-based approach, drawing on existing laws and regulations.
Non-statutory approach and guidance to regulators
Despite regulators’ crucial role, neither their implementation of the principles nor any requirement for them to collaborate will be legally binding. The Government anticipates the need to introduce a legal duty on regulators to give due consideration to the framework’s principles; both the decision to do so and its timing will be informed by the regulators’ strategic AI plans and a planned review of regulatory powers and remits, to be conducted by the Central Function, as we explain below.
Regulators asked to publish their AI strategic plans by April 2024
Yet, there is no doubt that the Government expects individual regulators to take swift action to implement the AI regulatory framework in their respective domains. This expectation was communicated clearly through individual letters sent by DSIT to a selection of leading regulators4, requesting them to publish their strategic approach to AI regulation by 30th April 2024. DSIT clarified that the plans should include the following:
- An outline of the measures to align their AI plans with the framework’s principles
- An analysis of AI-related risks within their regulated sectors
- An explanation of their existing capacity to manage AI-related risks
- A plan of activities for the next 12 months, including additional AI guidance
The detailed nature of the request and the short deadline illustrate the urgency with which the Government expects the regulators to act. Indeed, the publication of these plans will be a significant development, providing the industry with valuable insights into regulators’ strategic direction and forthcoming guidance and initiatives.
Firms, in turn, must be prepared to respond to increasing regulatory scrutiny, implement guidelines and support information-gathering exercises. For example, the Competition and Markets Authority has already launched an initial review into the market for foundation models.
From a sector perspective, regulatory priorities may include addressing deepfakes under Ofcom’s illegal harms duties in the Online Safety Act, using the FCA’s ‘skilled person’ powers for algorithmic auditing in financial services, or addressing fair customer outcomes through the Consumer Duty framework. From a broader consumer protection perspective, activities may involve leveraging the Consumer Rights Act to safeguard consumers who enter into sales contracts for AI-based offerings, or existing product safety laws to ensure the safety of goods with integrated AI.
The plans will also highlight the areas where regulatory coordination is necessary, particularly regarding any supplementary joint regulatory guidance. Businesses would welcome greater confidence that following one regulator’s interpretation of the framework’s principles in specific use cases (e.g., in relation to transparency or fairness) will not conflict with another’s. In this context, regulatory collaboration, in particular via the Digital Regulation Cooperation Forum (DRCF),5 will also play an important role in facilitating regulatory alignment.
Additional funding to boost regulatory capabilities
Responding to a fast-moving technological landscape will continue to challenge regulators and firms. While AI is undoubtedly a key priority, publishing both their strategic plans (by the end of April) and AI guidance (within 12 months) may be challenging for some regulators. In recognition, DSIT has announced £10 million to help regulators put in place the tools, resources and expertise needed to adapt and respond to AI.
This is a significant investment. Yet, with some 90 regulators6, prioritising its allocation and securing the best value for money will require careful consideration. The Government will work closely with regulators in the coming months to determine how to distribute the funds. Pooling resources to develop shared critical tools and capabilities – such as algorithmic forensics or auditing – may also be an effective way to stretch limited funds.
2. Central function to support regulatory capabilities and coordination
Given the widespread impact of AI, individual regulators cannot fully address its opportunities and risks in isolation. In response, the Government has set up a new Central Function within DSIT to monitor and evaluate AI risks, promote coherence and address regulatory gaps. Key deliverables will include an ongoing review of regulatory powers and remits, the development of a cross-economy AI risk register, and continued collaboration with existing regulatory forums such as the DRCF.
By spring 2024, the Central Function will formalise its coordination efforts and establish a steering committee consisting of representatives from the Government and key regulators. The success of the UK’s AI regulatory approach will rely heavily on the effectiveness of this new unit. Efficient coordination between regulators, consistent interpretation of principles, and clear supervisory expectations for firms are crucial to realise the framework’s benefits.
3. AI and Digital Hub
A pilot multi-regulator advisory service, the AI and Digital Hub, will be launched by the DRCF in spring 2024 to help innovators navigate multiple legal and regulatory obligations before product launch. Open to firms meeting its eligibility criteria, the Hub is intended not only to facilitate compliance but also to foster closer cooperation among regulators.
The DRCF has also confirmed that, as it addresses inquiries within the Hub, it will publish the outcomes as case studies. This should help address a common frustration with similar innovation hub initiatives: insights gained are often not shared beyond the participating firms, limiting the benefits to the wider community. Another crucial step will be to ensure that these insights feed more quickly into regulators’ broader regulatory and supervisory strategies.
Future regulation of developers of highly capable general-purpose AI (GPAI) systems
The UK’s approach to GPAI has undergone an important shift, moving away from a sole focus on voluntary measures towards recognising the need for future targeted regulatory interventions. Such interventions will be aimed at a select group of developers of the most powerful GPAI models and systems, and could cover transparency, data quality, risk management, accountability, corporate governance, and addressing harms from misuse or unfair bias.
However, the Government has not yet proposed any specific mandatory measures. Instead, it has made clear that additional legislation would be introduced only if certain conditions are met, including confidence that existing legal powers are insufficient to address GPAI risks and opportunities, and that voluntary transparency and risk management measures have proved ineffective.
This approach could create some immediate challenges for organisations. For example, DSIT itself highlights that concentrating liability on firms deploying AI, without appropriate allocation across the rest of the value chain, could lead to harmful outcomes and undermine adoption and innovation.
Additionally, a highly specialised and concentrated market of GPAI providers can make it difficult for smaller organisations deploying GPAI systems to negotiate effective contractual protections. In the EU, GPAI providers will be accountable for specific provisions under the EU AI Act and will also need to provide technical documentation to downstream providers. In the UK, however, many measures will remain voluntary for the time being. To mitigate these risks, individual organisations should implement additional safeguards: for example, requiring enhanced human review to manage the risk of discrimination where a GPAI provider limits or excludes liability for it, and reviewing contracts with their own clients to exclude or cap liability where appropriate and legally possible.
As for the other two types of advanced AI models, namely highly capable narrow AI and agentic AI, the Government will continue to gather evidence to determine the appropriate approach to address the risks and opportunities they pose.
International alignment and competitiveness
The UK recognises the importance of international cooperation in ensuring AI safety and effective governance, as demonstrated by the inaugural global AI Safety Summit held last year. The event drew participation from 28 countries, including the US, as well as the EU, with follow-up summits in 2024 scheduled to be hosted by South Korea and France.
However, while most jurisdictions aim for the same outcomes in AI safety and governance, there are differences in national approaches. For example, while the UK has been focusing on voluntary measures for GPAI, the EU has opted for a comprehensive and prescriptive legislative approach. The US is also introducing some mandatory reporting requirements, for example for foundation models that pose serious national or economic security risks.7 In the short term, these may serve as de facto standards or reference points for global firms, particularly due to their extraterritorial effects. For example, the EU AI Act will affect organisations marketing or deploying AI systems in the EU, regardless of their location.
While a non-statutory approach has its advantages, it must be implemented in a way that secures international recognition and, where applicable, equivalence with other leading national frameworks. To achieve this, it will be important for the UK Government to establish a clear and well-defined policy position on key AI issues as part of its framework implementation, endorsed by all regulators. This will help the UK continue to promote responsible and safe AI development and maintain its reputation as a leader in the field.
Next steps – timeline of selective actions to implement the framework
The Government has unveiled a substantial array of actions and next steps as part of its roadmap for implementing the finalised AI framework, as set out below:
Figure 3 – Roadmap of key next steps