Chapter three: Hypothesis one – Responsibility is not only a role

The future of trust in artificial intelligence: responsibility, understandability and sustainability.

The ethicist is necessary but not sufficient to achieve trust throughout the artificial intelligence lifecycle

The rise of the ethicist for artificial intelligence 

There are practical challenges to designing, developing and using trustworthy artificial intelligence. No matter what stage a company is at, it is important to ask: on whom do these responsibilities fall?

For Merve Hickok, Senior Research Director of the Center for AI and Digital Policy at the University of Michigan, a number of titles are taking on this responsibility. AI ethicists are appearing as a “chief AI ethics officer, chief trust officer, ethical AI lead, AI ethics and governance or trust and safety policy adviser”, to name just a few.

A sceptic might say that so many different titles, the lack of clarity about the role and the scramble to hire are all signs of “ethics washing”. Companies want to market themselves as trustworthy, and hiring an ethicist may be a way to do so.

Lurking beneath this trend is an assumption that ethics and trust are positively correlated: the more ethical you are, the more trustworthy you can be. Moreover, in the evolving library of reports, frameworks and principles that set out to guide responsible business practice, these terms often lose their meaning.

We need to be cautious about such assumptions. The words trust and ethics are not wholly interchangeable. Thinkers and scholars have contemplated ethics and trust for hundreds of years, and both concepts are fundamentally about the cultivation of good relations between humans.

Nell Watson, Chair of IEEE’s ECPAIS Transparency Expert Focus Group, who has also been an executive consultant on philosophical matters for Apple, is one of those thinkers in the context of artificial intelligence. She suggests that ethics consists of the kinds of ideas and actions that tend to drive towards greater trust.

Companies that are seeking to build trust in the people, practices and platforms that shape their AI solutions make principles of ethical artificial intelligence part of their operations.

“Moving from a point of view on ethics to actual operationalisation of ethics involves a big journey of understanding to know where in the workflows you’d even weigh in”, Liz Grennan, Global Co-Leader of Digital Trust at McKinsey & Company, told us.

The role of the AI ethicist appears to carve out a space for someone with the skills and experience to define where to weigh in, and to start doing so.

Salesforce was an early mover, announcing in 2018 that it would hire an ethics chief, Paula Goldman, with the broad remit of “developing strategies to use technology in an ethical and humane way”.

More businesses have joined in, spearheaded in the last three years by large corporations. KPMG, for example, suggested in a 2019 blog post that the AI ethicist was one of the top five AI hires that companies needed to succeed, with that role taking “the critical responsibility of establishing AI frameworks that uphold company standards and codes of ethics”.

Trust also appears to be increasingly identified as a critical aspect of emerging technology jobs. Cognizant, for example, published a report in 2018 on “getting and staying employed over the next 10 years”, in which it listed the “chief trust officer” as one of the “21 jobs of the future”, alongside positions such as “cyber city analyst”, “personal data broker” and “AI business development manager”.

A few years into that future, there is little evidence that this particular role has become the dominant one. But the employment of ethicists has gained momentum across a surprisingly diverse range of sectors and AI applications, with enterprise software companies such as Hypergiant, but also more traditional retail organisations such as Ikea, making room for this sort of position.

The emergence of the AI ethicist coincides with a wave of sentiment about responsible innovation that has swept the technology sector. In 2012 a Harvard Business Review piece hailed the data scientist as perhaps the “sexiest job of the 21st century”. Today that role is much more taken for granted, though no less integral to AI implementation strategies.

While data scientists were the hot topic of the last decade, roles that steward responsible and ethical AI practices are now capturing corporate attention. Playing on that 2012 HBR piece, David Ryan Polgar, of All Tech is Human, has suggested that while the ethicist may be a contender for the title of sexiest job, there is still a need to “clearly define what these new roles entail”.

“We may want to take the AI ethicist role with a grain of salt. As I see it, there are a lot of people who have been thinking about these issues for quite a while. With GDPR, impact assessments, environmental and ethical anticipation… they have been alive in the world without networks of ethicists to run these things.”

David Leslie
Director of Ethics and Responsible Innovation Research
The Alan Turing Institute

Experts in different industries are still assessing the importance of these roles. Nonetheless, there is a common set of characteristics and competencies that is giving shape to the function of the AI ethicist. Let’s look at them: 

AI ethicist 101 

Olivia Gambelin, the co-founder and CEO of Ethical Intelligence, pointed out that an AI ethicist is a person with a “critical-thinking skill set” who can identify and establish channels of cross-organisational communication. This definition chimes with a piece from David Ryan Polgar in which he outlines the ideal candidate: “someone comfortable working with both engineers and executives, typically has an advanced degree and is capable of cross-functional work regarding ethics, privacy, safety and security.”

While the title of AI ethicist may end up being an umbrella term for these sub-disciplines, hiring someone who just ticks these functional boxes isn’t enough. As Will Griffin from Hypergiant points out:  

“Some tech companies with a lot of money and resources hire the best tech ethicists and send them all around the world to discuss subjects like algorithmic bias, fairness and transparency. But it’s fruitless, because these knowledgeable professionals don’t have the buy-in to actually change the products at their own organisations. This means that all the investment put into the ethics department doesn’t generate value.”

Will Griffin
Hypergiant

Winning ethical concessions from leadership is no easy task. It can involve challenging negotiations and a re-evaluation of the priorities of the business. As Olivia Gambelin points out in a paper on the subject, when acting as an ethicist in these negotiations, “you are aware that your points may jeopardise such an enormous profit” and so might be ignored. One of the key attributes of the ethicist, she says, is “bravery”.

“No one-size-fits-all” 

Putting ethics into action is a challenge. In artificial intelligence, the work of an ethicist depends on context. Gambelin points out that “in large companies it’s often a case of moving between teams and asking how decisions are made”. “In smaller companies”, she says, it’s typically about asking: “what are the values you are designing for – values found in company policy, but also wider societal values”. 

Large or small, all organisations hiring an ethicist should facilitate the work that person is trying to do: they need to be given the right access and the necessary authority. Beena Ammanath, Executive Director of Deloitte’s Global AI Institute, told us there was “no one-size-fits-all” approach to this, and that companies needed to ask: “does the ethicist have the right seat at the table? What level are they sitting at – the data science level or an executive level – to drive business process changes?”

Indeed, the level at which an ethicist operates shouldn’t limit their influence. Nell Watson said the role should be cross-cutting, allowing an ethicist to act as a “nominal ethical product owner” within an AI development team, with a direct line to the C-suite through something like a “red telephone”.

And yet… 

Although the experts we spoke to recognised the potential value of an ethicist in taking ownership of ethical practices and processes, getting ethics right for artificial intelligence requires more than just one person.

As Olivia Gambelin suggested: 

“The AI ethicist is just one piece of the AI ethics puzzle.”

Olivia Gambelin
Co-founder and CEO
Ethical Intelligence

Indeed, an ethicist is just one element in the wider web of trust relations that are found across the development of artificial intelligence. As Beena Ammanath told us, the AI ethicist is not the “be-all solution for getting trust in AI”.  

Other experts also pointed to potential pitfalls in the role of the AI ethicist. 

One such pitfall was described by Ray Eitel-Porter, Accenture’s Global Lead for Responsible AI. He spoke about the risk of “siloing off” those who either take on the specific role of AI ethicist or form a dedicated AI ethics team. In some larger tech companies, he noted, a role or team is allocated to ethics and the rest of the organisation then stops worrying about it, because it is seen as that one person’s or team’s job. “You could see how that could happen, right? You have a department that focuses on this [ethics] but it is somehow isolated from the rest of the organisation.”

So how can an organisation avoid these pitfalls, address the AI ethics puzzle and move towards more trustworthy relations? 

Eitel-Porter explained Accenture's approach to this challenge of siloing:  

“We at Accenture very much take the view that Responsible AI is a responsibility and a business imperative that has to be embedded across the whole of the organisation and not just within the technology people. We have [ethical] training, not just for data scientists, but for everybody, essentially, who is interacting with data and AI, because we think it's everyone's responsibility to be aware of the role that they have to play, and different people have potentially different roles.”

Ray Eitel-Porter
Global Lead for Responsible AI
Accenture

On a similar note, Lisa Talia Moretti, a digital sociologist at Goldsmiths, University of London, says that doing ethics for artificial intelligence is not a “one-stop shop”: to embed it effectively within an organisation, a “cultural shift” is needed. This can take different forms and there are different strategies to enable it, but a key theme runs throughout: responsibility for embedding ethical practices is best diffused across an organisation rather than landing on just one or two individuals.

At Accenture, for example, Ray Eitel-Porter’s organisation-wide approach is supported by what he calls a “centre of excellence”. The approach here is a kind of “pull mechanism”, he says, with employees being trained in responsible AI to ensure that the “leading thinking proliferates”. It is then possible to check and question throughout a product lifecycle, and if an ethical dilemma emerges it can be escalated to those who have the expertise to address it.

Training is the crucial component of such cross-organisational approaches to ethical and responsible AI. Dr David Leslie, Director of Ethics and Responsible Innovation Research at the Alan Turing Institute, said the need for upskilling employees to do their technological work responsibly was the “entry cost to the knowledge economy”.  

“The knowledge economy demands a high level of upskilling in order to keep pace with the widened range of consequences that the technologies are opening up.” 

David Leslie
Director of Ethics and Responsible Innovation Research
The Alan Turing Institute

Leslie’s view aligns with that of Nell Watson, who has helped to develop an online course called the Certified Ethical Emerging Technologist and who suggests that there is a clear need for “qualified individuals throughout teams who can operate a technical system ethically”.

Having multiple teams who understand how ethical dilemmas can be addressed not only facilitates trust among those employees but also signals to external stakeholders that AI solutions are worthy of trust, because the workflows used to develop them embed ethical deliberation.

Yet training and upskilling demand time and resources. Gerlinde Weger, an AI ethics consultant for the IEEE and a change management expert, told us that “the amount of change that’s going through companies now is colossal. If you give an employee one more thing to learn, that’s like the little wafer in Monty Python” – the wafer-thin mint that finally overwhelms Mr Creosote.

One way to prevent staff from becoming overloaded while still instilling this kind of organisational responsibility for ethics, suggests Weger, is to integrate ethics into existing procedures. For example, given that “when we are talking about ethics, one of the things we are talking about is risks”, a risk-profile template – which in financial services is mandatory for any project – could embed an ethical dimension.

Weger has also found that common practices in organisational change management, such as stakeholder and impact assessments, could provide templates for questions about the ethical implications of AI development. “If you can have these [ethical considerations] as additional lenses” to procedures that are already in place, she told us, “then it’s like, oh! It flows”.
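To make that idea concrete, here is a minimal sketch in Python of what embedding an ethical lens in an existing risk-profile template might look like. It is illustrative only: the lens categories, the severity scale and the escalation rule are our assumptions, not part of any template Weger described.

```python
from dataclasses import dataclass, field
from enum import Enum


class Lens(Enum):
    """Lenses applied to each item in the risk profile. FINANCIAL and
    OPERATIONAL stand in for a conventional template; ETHICAL is the
    additional lens Weger describes (names are illustrative)."""
    FINANCIAL = "financial"
    OPERATIONAL = "operational"
    ETHICAL = "ethical"


@dataclass
class RiskItem:
    lens: Lens
    question: str         # e.g. "Could training data encode historical bias?"
    severity: int         # 1 (negligible) to 5 (critical), scored by the team
    mitigation: str = ""  # empty until the team records a response


@dataclass
class RiskProfile:
    project: str
    items: list[RiskItem] = field(default_factory=list)

    def needs_ethics_escalation(self, threshold: int = 3) -> bool:
        """Flag the project for the ethics owner when any ethical-lens
        item scores at or above the threshold (an assumed rule)."""
        return any(
            item.lens is Lens.ETHICAL and item.severity >= threshold
            for item in self.items
        )


# A project team fills in the template as usual; the ethical lens
# simply adds rows to the same document rather than a new process.
profile = RiskProfile(
    project="customer-churn-model",
    items=[
        RiskItem(Lens.FINANCIAL, "Is the budget contingency sufficient?", 2),
        RiskItem(Lens.ETHICAL, "Could outputs disadvantage a protected group?", 4),
    ],
)

if profile.needs_ethics_escalation():
    print(f"{profile.project}: escalate to ethics owner")
```

The design point mirrors Weger’s observation: the ethical questions ride on an artifact that project teams already complete, so nothing new is bolted on – the escalation path simply connects the existing template to whoever holds the ethics remit.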

Whether a company integrates ethics into its existing workflows or creates new roles for the purpose, the work is demanding. Ultimately, however, it is an increasingly important requirement, and one that communicates how an organisation is addressing ethics – leading to greater levels of trust, both internal and external, in the people who steer the development of AI systems.


Looking ahead… 

The role of the ethicist is an important part of establishing trustworthy artificial intelligence. It is also a potential pitfall.  

The experts quoted here have shown that, while some combination of a specific role and a cross-organisational responsibility for AI ethics appears to promote trust in the architects of AI systems, there are different ways to optimise the role and ensure it can foster a widespread sense of responsibility. 

This variation reflects the way in which many companies, especially at the middle and lower end of the maturity spectrum, are still finding their way through a dense and somewhat fragmented ethical AI landscape. Organisations need a steady trajectory to move from a patchwork of principles and frameworks, like those mentioned in the introduction, towards well-established best practice. 

Sectors such as medicine and aviation rely on the consensus and codification of best ethical practices. In the UK, for example, the General Medical Council provides a clear set of standards from which medical practitioners make ethical judgments. In the US, the Air Line Pilots Association ensures the safety of the industry in part through its Code of Ethics.  

Many experts we have spoken to see a need for a similar consensus in the artificial intelligence ecosystem on best ethical practices and how those can drive trust. It is this need that we expand on in the next hypothesis.  

Roles such as head of sustainability and chief sustainability officer are now taken for granted in many organisations. Now that environmental sustainability is a key matter of public concern, the responsibility for sustainable business practices reaches beyond that individual role. Sustainability is more or less normalised, and is affecting the way businesses act on a huge scale.  

Our experts suggest that ethical concerns around artificial intelligence are on a similar trajectory. 

Olivia Gambelin believes that five years from now an ethicist will be employed at every company, just as data scientists commonly are today. With the seeds of ethical concern now planted, the ethicist provides the competencies to cultivate good corporate AI practice. Yet to continue growing the tree of trust – with branches of trustworthy AI architects – organisations must instil a culture that encourages all of their people to think like an ethicist.


The key points:

  • The responsibility for embedding ethical practices is best diffused across an organisation rather than landing on just one or two individuals.

  • All businesses should see the competencies involved in ethical development of artificial intelligence as an “entry-level” requirement for operating in a future economy built around these technologies.