The future of trust in artificial intelligence: responsibility, understandability and sustainability

Date posted
26 April 2022
Reading time
45 minutes

Developed by Kainos and Tortoise, with the help of more than 20 leading AI experts, this report explores the future of trust in artificial intelligence: a future in which trust is actively built between people and the systems they use. 



Arguing that trustworthy technology is sustainable technology, the report explores how the risks of misusing artificial intelligence, much like humanity's impact on our planet's climate, need to be addressed.

The move towards environmental sustainability has relied on professionalisation, standardisation and mechanisms for disclosure, all of which build confidence that the world economy can decarbonise.

Artificial intelligence seems to be on a similar trajectory. 

Governments and corporations are considering ways of ensuring that these technologies are lawful, ethical and robust, so that their benefits are sustainable and can be realised in the long term.



Tortoise and Kainos spoke to a range of experts from across the field of artificial intelligence, from executives and technicians to researchers and government officials.

We present their insights in the form of three hypotheses about how the domain of trust in artificial intelligence is changing.

We argue that trustworthy technology is sustainable technology.



Tortoise is a slow newsroom. It doesn't focus on breaking news, but on what's driving it.

Kainos combines transformative technology with courageous ambition to reimagine how organisations operate – and deliver change for good.

We've partnered to explore and share new perspectives on a topic that's shaping the future. 

“The need for actionable ethics standards is crucial in these times”

Nell Watson
Chair of IEEE’s ECPAIS Transparency Expert Focus Group

“Ethics is a cross organisational responsibility that has to be embedded across the whole of the organisation and not just within the technology people”

Ray Eitel-Porter
Global Lead for Responsible AI

“In the end it is a strategic move to be ethically sensitive and culturally aware. You can do this by ensuring your tech team is diverse to help naturally detect biases”

Dr. Emma Ruttkamp-Bloem
Chairperson for UNESCO’s Recommendation on AI Ethics

“The knowledge economy demands a high level of upskilling in order to keep pace with the widened range of consequences that the technologies are opening up”

Dr. David Leslie
Director of Ethics and Responsible Innovation Research
The Alan Turing Institute

“We have to get down from this top level and get down in the weeds to figure out how ethics and trust is going to happen - that’s part of the operationalisation, making it context specific”

Beena Ammanath
Executive Director
Deloitte AI Institute