Implementing ethical principles for AI in Defence

Date posted: 3 February 2023
Reading time: 4 minutes
Jane Fletcher
Experience Design Principal · Kainos

In the first blog of this three-part series, we examined the challenges of designing AI for government Defence clients. During that discussion, we touched on the need for extreme caution when proceeding with Defence AI projects – particularly when AI is used to deliver military effect, rather than only back-office process improvement.

At Kainos, we’re delighted to be working with Dstl, part of the Defence AI Centre (DAIC), through our £7m contract for advanced rapid AI experimentation. 

The Ministry of Defence (MoD) has recently established ethical principles for AI, outlined in the government’s Defence AI Strategy – and our work alongside our Dstl colleagues has been instrumental in understanding how to implement those principles in delivery.

Leading by example 

When designing for military scenarios where ethics and trust are of critical importance, it is not enough to explore and mitigate potential harms only where AI is used as intended and originally foreseen. We must also consider vulnerabilities: what would happen if the technology were to fail, or if the human-computer relationship were to go wrong? What mitigations would avoid such a scenario? And beyond potential technical failures, what are the wider impacts of the decisions the AI is making, and can a human override those decisions?

It's also crucially important to explore what happens if AI is exploited in a context different from the one it was originally designed for – for example, AI originally built for training that is later used in an operational setting.

To de-risk this, every piece of data science, wherever it goes, needs to be bound to a piece of governance. This ensures we manage not just ethical and trust considerations, but the governance of usage too.
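As a purely illustrative sketch – the class and field names below are our own assumptions, not an MoD or Dstl standard – binding data science to governance could mean packaging a model with a machine-readable governance record that is checked every time the model is invoked:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GovernanceRecord:
    """Hypothetical governance metadata bound to a model artifact."""
    lawful_basis: str             # reference to the legal and governance statement
    approved_contexts: frozenset  # e.g. frozenset({"training"}) but not "operational"
    human_override: bool          # whether a human can override the model's decisions
    ethics_review_id: str         # link back to harms/ethics workshop outputs

class GovernedModel:
    """Couples a model to its governance record so it cannot be
    invoked outside the context it was approved for."""

    def __init__(self, model, governance: GovernanceRecord):
        self.model = model
        self.governance = governance

    def predict(self, inputs, *, context: str):
        # Refuse to run in a context the governance record does not approve.
        if context not in self.governance.approved_contexts:
            raise PermissionError(
                f"Model approved for {sorted(self.governance.approved_contexts)}, "
                f"not for '{context}' use"
            )
        return self.model.predict(inputs)
```

Under this sketch, a model approved only for training would raise an error if someone attempted to call it with `context="operational"`, forcing the repurposing decision back through governance rather than letting it happen silently.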

How we build an ethical approach into our delivery for Defence 

In partnership with our Dstl and DAIC colleagues, and as part of a ‘one team approach’ that draws on best-in-class knowledge from both industry and the scientific community, we base our work on the Defence ethical principles mentioned above as we move rapidly from ideation to production.

We run harms workshops and stretch scenarios, introduce debiasing strategies where appropriate, conduct ethics workshops and more. Although delivery is examined through a multitude of lenses, everything refers back to those principles and documents the decisions made.

Here’s a typical example of the multi-stage process we use to approach problem statements in Dstl: 

Pre-discovery: 

We start with a harms workshop to understand potential harms and mitigations. At this stage we are only beginning to look at the higher-level concept of what we would want to implement, the business problem and the landscape. Working with specialists in Defence, we review their legal and governance statement and ask questions to sense-check the concept: does it have a lawful basis, and is it appropriate? Where could bias arise? This quick assessment determines whether the concept is ready to go into a deeper discovery.

Discovery: 

We establish a diverse team, blending Kainos and Defence individuals working as one, with careful consideration of differing skillsets and world views. As we learn more about the concept, we start to understand data quality and the norms and values within the domain. We run another harms workshop, leveraging this growing insight, and verify that our understanding of the harms and mitigations aligns with the ethical principles summarised in the Defence AI Strategy. In addition to the legal and governance statement, we start to look at responsibilities, qualification, trust and ethics statements.

At this stage, we also do a deep dive into bias. A research team works closely with ethicists and data scientists to examine the data sources and identify any potential bias in data quality. In addition, we bring in user researchers to create primary user personas and to understand and plan a debiasing strategy for the wider solution.
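As an illustrative example only – the dataset, column names and threshold below are hypothetical, not drawn from any Dstl project – a first-pass check for one common form of data bias, unequal representation and outcome rates across groups, might look like this:

```python
import pandas as pd

# Hypothetical training data; 'group' and 'label' are illustrative column names.
df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b", "b", "b"],
    "label": [1, 0, 1, 1, 1, 1, 0, 1],
})

# Representation: is any group badly under-sampled?
print(df["group"].value_counts(normalize=True))

# Outcome rate per group: large gaps can signal label bias in the source data.
outcome_rates = df.groupby("group")["label"].mean()
disparity = outcome_rates.max() - outcome_rates.min()
print(outcome_rates)

# Illustrative threshold; in practice this would be agreed with the ethicists.
if disparity > 0.2:
    print(f"Outcome-rate disparity {disparity:.2f} exceeds threshold - investigate")
```

Checks like this are a prompt for investigation by the research team, not an automatic pass or fail.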

Moreover, we investigate the potential risk of harm from misuse of the concept, mapping scenarios where there may be vulnerabilities. 

Alpha and proof of concept: 

In the alpha/proof of concept phase, the work is about exploring ideas and settling on a potential solution. We hold another harms workshop, where our deeper understanding of the candidate ideas lets us explore further mitigations. This is the phase where we can rapidly ideate solutions and designs, understand business benefits and user needs, and agree a potential solution. As the technology matures, we continue to learn and refine.

Beta stage: 

Then we go through a beta stage, involving further refining and testing. In total, the project spans four critical stages in six months. At the end of the beta stage, we run an ethics assurance workshop led by a highly experienced Dstl ethicist, who digs into all aspects of the concept’s delivery, challenges assumptions and recommends areas for further research.

We validate any models we have built, scrutinise them for bias, and support the drafting of data explanation and trust statements, ensuring that these statements can also be understood by a less technical audience. More generally, we work so that anyone, including non-technical specialists, can understand how the data science operates alongside governance and oversight. As part of the legal and governance statement at this stage, we support the assignment of responsibilities to key stakeholders.
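Scrutinising a model for bias is a different question from checking the data: here we ask whether the model’s errors fall evenly across groups. A minimal sketch, again with hypothetical column names and no claim to reflect Dstl’s actual tooling:

```python
import pandas as pd

def error_rates_by_group(predictions: pd.DataFrame) -> pd.Series:
    """Per-group error rate for a fitted model's predictions.

    Expects columns 'group', 'label' (ground truth) and 'pred'
    (model output); these names are illustrative assumptions.
    """
    errors = predictions["label"] != predictions["pred"]
    return errors.groupby(predictions["group"]).mean()

# Usage: a gap between the groups' error rates is a flag for the ethicists
# and data scientists to investigate.
preds = pd.DataFrame({
    "group": ["a", "a", "b", "b", "b"],
    "label": [1, 0, 1, 1, 0],
    "pred":  [1, 0, 0, 1, 1],
})
print(error_rates_by_group(preds))
```

The same per-group view can feed the data explanation and trust statements, since it can be presented to a non-technical audience as plainly as "the model is wrong more often for group b than for group a".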

Throughout all this, we test iteratively against the primary user personas, refine all explanations and produce a data ethics report. We then produce an information pack to ensure the whole process is thoroughly documented as the concept matures into operational delivery.

This is the level of detail and consideration required to adhere to the published ethical principles for Defence AI projects.  

AI for the future 

This approach to mitigating AI risks provides a blueprint for reducing risk in other sectors – healthcare, for example – where outputs can be similarly consequential for those exposed to the technology. It also positions Defence to become a leader in deploying AI in an ethical and trustworthy manner.
