Chapter one: Introduction and foreword

The future of trust in artificial intelligence: responsibility, understandability and sustainability.


This report proposes three hypotheses about the future of trust in artificial intelligence: a future in which trust is actively built between people and the systems they use.

It offers insights that everyone can understand and apply: from developers to designers, executives to employees, and decision-makers to decision-takers.

By arguing that trustworthy technology is responsible, understandable and sustainable technology, this report explores how the risks from misuse of artificial intelligence, much like the impact of humans on our planet’s climate, need to be addressed.

Special thanks to…

  • Adam Leon Smith, CTO, Dragonfly
  • Air Cdre David Rowland, Senior Responsible Officer, Ministry of Defence’s AI Centre
  • Anna Fellander, Founder,
  • Anouk Ruhaak, Mozilla Fellow
  • Beena Ammanath, Executive Director, Deloitte Global AI Institute
  • Dama Sathianathan, Partner, Bethnal Green Ventures
  • Dr David Leslie, Director of Ethics and Responsible Innovation Research, The Alan Turing Institute
  • Dr Emma Ruttkamp-Bloem, Chairperson, Ad hoc Expert Group UNESCO Recommendation on AI Ethics, Professor of Philosophy, University of Pretoria, AI Ethics Lead, Centre for AI Research
  • Dr Frens Kroeger, Professor, Centre of Trust, Peace and Social Relations, Coventry University
  • Dr Gemma Galdón-Clavell, Founder, Eticas Consulting
  • Gerlinde Weger, AI Ethics Consultant, IEEE
  • Dr Jenn Wortman Vaughan, Senior Principal Researcher, Microsoft Research
  • Kostis Manolitzas, Head of Data Science, Sky Media
  • Lisa Talia Moretti, Digital Sociologist Consultant and Associate Lecturer at Goldsmiths University
  • Liz Grennan, Global Co-Leader of Digital Trust, McKinsey & Company
  • Minesh Tanna, Managing Partner, Simmons and Simmons
  • Nell Watson, Chair of ECPAIS Transparency Expert Focus Group, IEEE
  • Olivia Gambelin, Co-founder and CEO, Ethical Intelligence
  • Ray Eitel-Porter, Global Lead for Responsible AI, Accenture

…for contributing to this report.


Foreword: The complexity of trust in artificial intelligence

Alexandra Mousavizadeh, Director of Tortoise Intelligence, and Tom Gray, CTO and Director of Innovation, Kainos.

Many companies are only just beginning to adopt artificial intelligence. For others, the journey is well under way. Big or small, advanced or nascent, companies at all stages of maturity face many of the same challenges, and one of those is understanding trust.

Trust is complex, especially in advanced technological systems. Artificial intelligence is currently used in a vast range of applications: from keeping spam emails out of our inboxes to sequencing human genome data, with much more in between.

When it comes to the development and use of artificial intelligence there are many factors at play. Trust in a system varies depending on the data used, the conclusions or predictions reached and the sensitivity of the system to bias and other influences. The system’s history also plays a role. Crucially, artificial intelligence both influences and is influenced by the people around it: it is a product of their decisions and biases, but also has the potential to shape those decisions and biases. All people, whether they are developers, end users, data engineers, regulators or bystanders, have a complex relationship with the artificial intelligence systems they encounter. That two-way relationship makes it unusual among technologies.

The complex, multi-layered structure of its decision-making and the specialist knowledge involved in its engineering mean that artificial intelligence is not transparent or easy to understand. If you put most of us in the cockpit of a commercial aircraft, we’d be just as useless at controlling it as we would be at overseeing a machine-learning model. Yet many of us feel we must ask very different questions about the trustworthiness of artificial intelligence than we would about other critical systems on which our daily lives depend.

Trust in technology is, in fact, interpersonal trust: trust that people will make the right decisions. But building trust between people is not simple either. As artificial intelligence takes on more control within business, these relationships of trust will come under new pressures, pressures that will test the connections of trust between different parts of the wider AI ecosystem: between regulators and business, developers and managers, decision-makers and decision-takers.

We hope that this report, and the insights from its contributors, will help to advance the conversation about trustworthy artificial intelligence and give readers a sense of what the coming years might bring. We also hope it will give businesses, both large and small, a reference point for their strategies in building “human-centric” and long-lasting AI systems.