Eight recommendations on future regulation of highly capable AI models
In The King’s Speech earlier this year, the government announced that it is planning to establish “appropriate regulation to place requirements on those working to develop the most powerful AI models”.
We have implemented 100+ AI solutions across the Central Government, Defence, Health and Commercial sectors, and many of these solutions leverage general-purpose AI models. As a strategic partner to organisations that develop highly capable general-purpose models, we are in favour of proportionate regulation and believe that well-designed regulation can support innovation. We draw out eight recommendations that government may consider in the design of its approach. Our recommendations are written explicitly from the perspective of an AI services provider that is a deployer or downstream developer (we will use the term ‘deployer’ throughout) and end-user of highly capable general-purpose AI models and systems.

Recommendation 1: Focus regulation on AI risks that developers can best mitigate to protect the AI value chain
Systemic AI safety risks deserve priority and special attention, as elaborated further under Recommendation 4. However, we encourage government to design regulation for AI model developers with coverage beyond systemic AI safety risks alone. To stimulate AI adoption and innovation in the UK, it is important to recognise that some AI risks, present in both highly capable and less capable systems, are most firmly under the influence of model developers. These include, for example, unequal model performance across groups, environmental harm, and potential copyright breaches involving data used for model training.
AI model developers have levers to mitigate risk at the development stage that an AI deployer or end-user does not have. For example, unequal model performance across groups (e.g. the model working better for educated, native speakers of American English than for less well-educated people or those for whom English is a second language) can be mitigated by using representative training data. The developer can influence this directly, whereas the deployer has substantially less influence. Similarly, AI model developers can opt not to include copyrighted material in their training data, whereas preventing a model from reproducing copyrighted material once it has been trained on such data is much more difficult for deployers to achieve.
We therefore propose that when government addresses AI model developers, it considers the wider scope of AI risks that developers can best control in the AI value chain, as managing these can benefit the wider AI ecosystem.
Recommendation 2: Extend the scope of regulation to all non-application specific AI models
As an extension of Recommendation 1, we propose that government does not only consider regulating the development of the most highly capable models, but expands the scope of regulation to address, more generally, the development of:
- all general-purpose AI models and
- narrower AI models and systems that are not sector-specific, such as anomaly detection models that are not tied to any particular sector.
We will hereafter refer to both these types of AI models as non-application specific AI models.
Our recommendation to regulate this scope of AI models is motivated by the existing pro-innovation approach published by DSIT in February 2024, which is regulator- and sector-led. Under this approach, the onus of following regulatory guidance is likely to fall on the deployer and end-user, as these are the actors in the AI value chain who implement systems in the specific sectors and contexts that regulators oversee and are close to. Developers of non-application specific AI models are unlikely to be captured by, or feel addressed by, UK regulatory guidance beyond that of horizontal regulators such as the ICO and EHRC.
Yet, as Recommendation 1 recognises, some AI risks are more easily addressed at the development stage. The result is that deployers such as Kainos, and our customers, carry regulatory requirements and responsibility for managing AI risks, such as algorithmic bias, while not always being in the best position to address them. Regulation that addresses developers directly, who would otherwise not be covered by regulatory guidance, could help rectify this imbalance.
Note that while our recommendations focus on AI models as ingredients that deployers use to build application-specific AI systems, the scope may be extended to AI systems to the extent that they are equally unlikely to be captured under existing regulatory remits.

Recommendation 3: Impose baseline transparency requirements on non-application specific AI models
For this wider scope of AI models, we are not calling for extensive regulation, but for proportionate minimum requirements that would help bring confidence to AI adoption in a complex AI value chain. We propose that transparency around the management of AI risks that developers are closest to could form a key enabler for this.
When we have dinner guests with allergies, we are responsible for keeping them safe, and we would not want to cook with a product without knowing its ingredients. Likewise, our data science team has at times opted to build (narrow) AI models from scratch, even though pre-trained models were available, simply because there was insufficient information about those pre-trained models. To help deployers understand and assess the appropriateness of the models they are working with, transparency around pre-trained AI models is non-negotiable. While this principle holds for any pre-trained AI model, future sector-specific regulatory guidance is likely to inform transparency for application-specific AI models; this will not necessarily be the case for non-application specific AI models.
We therefore recommend that developers of non-application specific AI models be required to maintain transparency documentation. Developers could make this documentation publicly available at their discretion, or be required to provide it to deployers and end-users on request.
Model cards could form a good basis for transparency requirements, as these are already widely adopted in the open-source community. We would, however, suggest that the template be evolved to more explicitly include information that is key for deployers to understand the risk management of the model: for example, how diversity of training data was ensured, which bias mitigation techniques and guardrails were applied, whether copyrighted content was used, and known or estimated energy and water consumption (environmental impact). Ideally, this updated template would eventually become an international standard that the UK government, as a leader in AI assurance, helps to shape. It could also form a baseline for vertical regulators to adopt, enhancing simplicity.
To clarify: transparency requirements should be linked to the activity that the AI actor has control over. In the common scenario where developers fine-tune a baseline general-purpose model and release it as a new model version, transparency requirements should therefore cover the fine-tuning dataset used and the results of performance and fairness assessments. The aim is to sufficiently inform deployers; an illustrative sketch of what such a record could contain follows below.
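To make this concrete, the sketch below shows one way an extended model card record for a non-application specific model could be structured, written in Python purely for illustration. The field names, types and example values are our own assumptions rather than a proposed standard; a real template would need to be shaped with standards bodies, regulators and the open-source community.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ExtendedModelCard:
        """Illustrative transparency record for a non-application specific AI model."""
        model_name: str
        model_version: str
        base_model: Optional[str]                        # set when the release is a fine-tune of another model
        intended_uses: list[str] = field(default_factory=list)
        training_data_summary: str = ""                  # provenance and how diversity of data was ensured
        fine_tuning_data_summary: Optional[str] = None   # dataset used for fine-tuning, if any
        copyrighted_content_statement: str = ""          # whether and how copyrighted material was used
        bias_mitigations: list[str] = field(default_factory=list)
        guardrails: list[str] = field(default_factory=list)
        fairness_assessment_results: dict[str, float] = field(default_factory=dict)
        estimated_energy_kwh: Optional[float] = None     # known or estimated training energy consumption
        estimated_water_litres: Optional[float] = None   # known or estimated water consumption

    # Hypothetical example of a fine-tuned release, populated by the developer
    card = ExtendedModelCard(
        model_name="example-general-purpose-model",
        model_version="2.1-finetuned",
        base_model="example-general-purpose-model-2.1",
        fine_tuning_data_summary="~50k curated instruction-following dialogues",
        fairness_assessment_results={"dialect_performance_gap": 0.03},
    )

Such a record could be serialised and published at the developer's discretion, or shared with deployers and end-users on request, in line with the transparency requirement described above.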
Benefits of transparency requirements would go beyond fostering confidence in the AI value chain; they would also support independent evaluations and the scoping of red teaming exercises.
Recommendation 4: Mandate evaluation of safety risks for the most highly capable AI models
We understand that, as it stands, the main purpose of the anticipated regulation is to protect society from systemic safety risks that could accompany the most powerful AI models. We agree that this area deserves dedicated regulatory requirements. We suggest that the definition of ‘highly capable’ be addressed holistically and be risk-based, taking into account the likely reach and impact of AI models. Given the rapid pace of technological development, fixed compute or other thresholds are unlikely to be future-proof.
For those models that are in scope, we recommend that a series of model evaluations and red-teaming exercises be mandated to assess AI safety risks. The risks evaluated should, at a minimum, include the ability to effectively deceive and manipulate, to self-proliferate, to pursue goals and tasks other than those originally intended, and the potential for misuse in catastrophic cybercrime, weapon development and/or physical harms. Such evaluations would allow assessment of whether risks surpass certain thresholds, with the thresholds defined by the AI Safety Institute. Where they do, risk mitigations should be put in place and the robustness of controls demonstrated before models are released to the UK market.
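As a purely illustrative sketch of the procedural logic we have in mind, the snippet below gates a release on regulator-defined thresholds. The risk categories, scores and threshold values are hypothetical placeholders; in practice the thresholds and evaluation methodology would be defined by the AI Safety Institute, not by developers.

    # Hypothetical risk categories and thresholds; real values would be set by the AI Safety Institute.
    RISK_THRESHOLDS = {
        "deception_and_manipulation": 0.20,
        "self_proliferation": 0.10,
        "unintended_goal_pursuit": 0.15,
        "cyber_misuse": 0.25,
        "weapons_uplift": 0.05,
    }

    def release_gate(evaluation_scores: dict[str, float]) -> list[str]:
        """Return the risk categories whose evaluated score exceeds the defined threshold."""
        return [
            category
            for category, threshold in RISK_THRESHOLDS.items()
            if evaluation_scores.get(category, 0.0) > threshold
        ]

    # An empty result would indicate no threshold breaches; otherwise mitigations
    # and evidence of control robustness would be required before UK market release.
    breaches = release_gate({"deception_and_manipulation": 0.31, "self_proliferation": 0.02})
    if breaches:
        print("Mitigations and control evidence required for: " + ", ".join(breaches))

The point of the sketch is the procedure rather than the specific numbers, which, as noted under Recommendation 5, should remain adjustable over time rather than fixed in regulation.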
Recommendation 5: Ensure resilience by taking a principled and procedural approach to regulation
To ensure resilience in a fast-developing space where existing AI risks continue to develop and new risks will emerge, any regulation government adopts should define an approach that:
- is principles-based and builds procedures around transparency, risk evaluation, capability evaluation and risk control, rather than leaning on fixed thresholds
- can be amended over time, by allowing for adjustments and updates to implementation guidelines.

Recommendation 6: Create a general AI development regulator by elevating the status of the AI Safety Institute, Responsible Technology Adoption Unit or ICO
The general-purpose and non-sector specific nature of some AI models means that their developers will not be easily reached by regulators other than horizontal regulators such as the ICO and EHRC. It will therefore be important to designate a body with the remit to regulate developers of non-application specific AI models directly.
Logical parties to take the role of a regulator of non-application specific models - which we will call a general AI development regulator - could be:
- The Responsible Technology Adoption Unit (RTA) – to regulate less capable non-application specific AI models
- The AI Safety Institute (AISI) – to regulate highly capable AI models
- The Information Commissioner’s Office (ICO) – given its experience, its horizontal nature, and the fact that general-purpose models frequently include personal data (PII) in their training datasets, which could make the overlap with the ICO’s remit substantial
For simplicity’s sake, it is likely preferable to nominate one body, such as the ICO, to regulate all non-application specific AI models, and to leverage the AISI as an operational body supporting the evaluation of highly capable AI models. To avoid overlap and confusion, this body will need a clear remit covering specific types of AI models and AI actors in the value chain. Such a new general AI development regulator should exist alongside existing regulators and should be granted a comprehensive set of regulatory powers to enforce the requirements effectively.
Recommendation 7: To provide clarity and minimise regulatory burden, ensure international interoperability and leverage safety cases
We encourage close international collaboration and alignment, with international interoperability considered in the design of any regulatory requirements. This should include alignment, as far as is sensible, with the transparency requirements on general-purpose AI model providers in the EU AI Act. International alignment will be key to keeping the regulatory burden proportionate, will be practical for organisations operating internationally, and will help keep the regulatory landscape navigable, including for smaller enterprises. Leveraging international standards can support such harmonisation.
We have taken note of AISI’s recent announcement of its work to advance safety cases and believe this is an exciting development. Safety cases have the potential to play a key role in providing clarity and guidance for deployers, including smaller players, on the circumstances under which they can safely and confidently deploy advanced pre-trained models.

Recommendation 8: Clarify potential liability of different AI actors through further case studies and open a dedicated consultation
A concern we frequently hear from clients is the lack of clarity around how liability would fall on different actors in the AI value chain if an AI system causes harm. AI systems are becoming increasingly complex with the advent of agentic AI and low-code AI-enabled applications. Actors contributing only a piece of such complex AI systems, e.g. downstream distributors of low-code, AI-enabled applications that form part of a wider AI system, struggle to interpret their potential liability and compliance requirements.
While we recognise that true regulatory certainty will need to partially come from future case law, we found Case Study 1, presented in the February 2024 ‘A pro-innovation approach to AI regulation: government response’ paper, extremely helpful.
We suggest that government and DSIT consider issuing more case studies like it, including some covering more complex AI systems. Such case studies could form a powerful tool to help AI actors gain confidence in their reading of liability under UK law. We recommend that government first provides more such hypothetical case studies, and then opens a dedicated consultation to explore the need for further legislative interventions specifically around liability for AI-related harms.
We live in exciting times, in which the opportunity AI presents for organisations and society alike is clear. However, the risks need to be brought sufficiently under control to allow the AI ecosystem to flourish safely. We hope that our reflections, grounded in the day-to-day reality that both we and our customers experience, can provide food for thought for one of the more important debates of our time.