Chapter four: Hypothesis two - Standardisation from diversity

The future of trust in artificial intelligence: responsibility, understandability and sustainability.

Standards throughout ethical AI development can help to cultivate trust through the sharing of best practice

In the past five years the number of published ethical AI principles and frameworks has grown rapidly. From 2018 to 2020 there was a flood of guidelines based on the work of government initiatives, supranational bodies and private companies. 

Algorithm Watch, a non-profit research and advocacy institute, found there were as many as 173 such guidelines by April 2020. Many of these documents are several hundred pages long, yet Algorithm Watch concluded that there was “a lot of virtue signalling going on and few efforts of enforcement”. Of all these guidelines, only 10 were found to have “practical enforcement mechanisms”.

Since that flood of guidelines, the AI ecosystem has been grappling with how businesses should actually adopt them. A somewhat tired ethical AI catchphrase of the last year on the speaking circuit is: “we need to move from principles to practice”.

The thought here is that saying you are ethical doesn’t justify the kind of trust in an organisation that can result from actually doing ethics. 

Yet the crucial question of how to practically embed ethics into the artificial intelligence lifecycle is complex, even though one literature review suggests that fairly extensive research and development efforts from corporate and academic actors have paved the way here. In our final section of this report we will try to disentangle how some of these practices are moving from prospect to procedure.  

It is important to question how the community can develop an internationally recognised certification of best practices, especially if it is to cultivate trust in what can often feel like an alphabet soup of principles and frameworks.  

Many of the experts we spoke to argued that standardisation of both technical and ethical AI practices was needed.  

In terms of ethical practice, which this section focuses on, standardisation is taking place across three major dimensions: certification of products and services, coalitions of ethics professionals, and international standardisation of AI ethics recommendations.

Standard bearers: the IEEE and ISO  

Nell Watson, Chair of IEEE’s ECPAIS Transparency Expert Focus Group, is close to the leading edge on each of these dimensions. The Institute of Electrical and Electronics Engineers (IEEE) is a well-established technical professional organisation which prides itself on being a “trusted voice for engineering, computing and technology information around the globe”.

Watson’s work on the IEEE’s newly developed CertifAIed mark seeks to engineer “credit-score-like” mechanisms, drawing on a well-defined set of ethical criteria to safeguard algorithmic trust with certification and standards.  

The IEEE, Watson says, “has an amazing, long-standing pedigree and while ethics is a new element that has only arisen in the last five years or so, it’s a natural outgrowth from electronic standards to practical, actionable ethical standards”.

“The need for actionable ethics standards is crucial in these times.”

Nell Watson
Chair of IEEE’s ECPAIS Transparency Expert Focus Group
IEEE

She adds that the IEEE mark is the only initiative that is offering this to the market. 

The mark is a risk-based framework based on a number of ethical criteria. To receive it, an organisation will go through a rigorous process with an IEEE assessor, who makes an “initial assessment of the organisation, the technology and the area of operation for particular use cases to decide which criteria suite might be most useful for these particular cases”. 

Although the IEEE mark is still in the piloting stages, Kostis Manolitzas, Head of Data Science at Sky Media, says it has the potential to fill a gap that he feels exists across multiple sectors, even though his own, telecommunications, is “not a high-stakes industry”:  

“I think we need a framework that is consolidated and initially can be a bit more generic and doesn’t have to be applicable in every case, but at least it can cover the majority of the usage of these algorithms. Collaboration is going to be needed.” 

Kostis Manolitzas
Head of Data Science at Sky Media

The IEEE is one of two leading standards bodies that are addressing the international standardisation of ethical AI practices. The International Organization for Standardization (ISO) has been working on a range of standards since 2017, when, alongside the International Electrotechnical Commission (IEC), it created the expert working group SC 42 to make “headway on a groundbreaking standard that, if accepted, will offer the world a new blueprint to facilitate an AI-ready culture”.

Drawing on diverse stakeholders and what the SC’s Chair, Wael William Diab, calls a “management system approach”, the group seeks to establish “specific controls, audit schemes and guidance that are consistent with emerging laws, regulations and stakeholder needs”. 

There are as many as 10 published standards under the SC 42 group, which are presented more like the “generic framework” that Manolitzas thinks will be useful. These include guidance and reference documents that outline artificial intelligence concepts and terminology, use cases, big data reference architecture and an overview of trustworthiness in artificial intelligence. 

The standard on trustworthiness reads like a very initial point of reference. It provides a survey of existing approaches to improve trust, along with a discussion of how to mitigate AI risks and improve the trustworthiness of AI systems. The extent to which it is context-specific and ready for use within different sectors remains somewhat unclear.

That said, certain implementation aspects are covered by the ISO, which has a further 27 standards under development, according to its website, covering topics from machine learning explainability to an overview of ethical and societal concerns.

Some of these could provide crucial support for companies faced with a wave of regulation and legislation. As Adam Leon Smith writes for the Chartered Institute for IT (BCS), there are other ISO standards in the pipeline that include: “two foundational standards defining the landscape, but also the first standard that will be relevant to the legislation ISO/IEC DIS 23894: Information technology - Artificial intelligence - Risk management”.

This alignment with legislation is particularly vital in light of the recent EU AI Act. The proposed law offers a four-tiered, risk-based approach to regulation. If an application sits at the lower end of the risk spectrum, a lighter level of self-assessment, compliance and legal enforcement is needed, while higher-risk systems will be subject to heavier, externally audited compliance requirements.
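To make the tiered logic concrete, the sketch below shows one way an organisation might triage its AI portfolio against such a risk-based regime. It is a minimal illustration only: the tier names, example systems and obligation labels are simplifications drawn from the general shape of the proposal, not an implementation of the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers loosely mirroring a four-tier, risk-based regime."""
    UNACCEPTABLE = "prohibited practice"           # e.g. certain manipulative or social-scoring uses
    HIGH = "externally audited compliance"         # e.g. recruitment or credit-scoring systems
    LIMITED = "transparency obligations"           # e.g. chatbots disclosing they are AI
    MINIMAL = "self-assessment / voluntary codes"  # e.g. spam filters

def compliance_obligation(tier: RiskTier) -> str:
    """Map a risk tier to the broad compliance burden described in the text."""
    return tier.value

# Hypothetical triage of an organisation's AI systems.
portfolio = {
    "CV screening model": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in portfolio.items():
    print(f"{system}: {tier.name} risk -> {compliance_obligation(tier)}")
```

Even a rough triage of this kind makes clear where the heavier, externally audited requirements (and the associated costs discussed below) are likely to fall.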

Smith told us that the price of compliance for companies developing high-risk AI products would be particularly high. He referred to work from the Center for Data Innovation which estimated the average cost for a European SME using a high-risk system would be as much as €400,000 for the external impact assessment or audit.

The cost of compliance will also be high because of litigation, Smith said, and he thinks that larger companies will inevitably suffer less:  

“If you're a big firm and you can afford lots of lawyers to spend months agonising over it, you can probably come up with a way of complying... Of course, litigation is not really an acceptable approach if you're an SME; you can't really be spending lots of money on technology lawyers who are very hard to come by.”

Adam Leon Smith
CTO
Dragonfly

Yet he also believes that technical and ethical standards can help reduce some of the financial burden of legal and regulatory compliance. The range of standards offered by the ISO in general, and its standard on AI risk management in particular, could provide a valuable barometer for anticipating new legislation and a means of minimising the cost of compliance. 

But as draft legislation, the EU AI Act does not yet provide much help for companies to get ahead with their compliance. “You can’t really comply with it [the EU AI Act] until you've seen the standards that are going to be written – but these haven’t been written yet,” Smith told us. 

Calls for a coalition 

So what might it take to bring about consensus on best practices and standardisation? 

Collaboration can provide momentum for standard-setting initiatives such as those from the ISO or IEEE. Historically, clusters of professionals have come together towards a common ethical goal in the face of concerns about emergent harm or risk. Seán Ó hÉigeartaigh, Director of the AI: Futures and Responsibility Programme at the University of Cambridge, has called such clusters “epistemic communities” – essentially networks of experts in a particular domain who seek to share knowledge.

During the Cold War, for example, “a community of arms experts helped shape cooperation between the USA and Russia… by creating an internationally shared understanding of nuclear arms control”, writes Ó hÉigeartaigh. 

The risks posed by AI may not be comparable to those of nuclear fallout, yet the overarching safeguarding and mitigation of risk that such a coalition might provide would clearly be beneficial.  

However, several experts we spoke to suggested that the community of AI ethics professionals often appears frustratingly siloed. In particular, Olivia Gambelin called for an international coalition of best practice among ethics specialists:

“We are part of a coalition… of ethics firms, because we all came together and we're like, we literally all do the same process. We call it a different thing and each of us has a different spin on it. But it really is the same process. And none of us are coming together and actually collaborating because it is really a mix of people from independent firms, then you have the non profits, and then you have the people within the organisations. And we find each other and we get really excited and we exchange a few thoughts and we exchange, like, how do you approach this problem? How are you doing this?”

Olivia Gambelin
Co-founder and CEO
Ethical Intelligence

Gambelin adds: “We don't necessarily have industry standards yet. But we feel the need for it.”

The coalition of ethics professionals is still nascent. But, promisingly for public trust, ethics-based accreditations are opening up for data science professionals. 

In 2020, the Royal Statistical Society (RSS) indicated that it would be working with a number of other professional associations to develop industry standards for data science professionals, to “ensure an ethical and well-governed approach so the public can have confidence in how their data is being used”. Moreover, the newly launched Association of Data Scientists now offers a Chartered Data Scientist qualification, which the association describes as the “highest distinction in the data science profession”.  

Much like the trust promoted by a hybrid approach to AI ethics that combines the role of the ethicist with a cross-organisational responsibility, these certification, standardisation and coalition developments have the potential to build a bedrock of public trust in those shaping the artificial intelligence ecosystem. 

And yet… 

While accreditation, certification and professional coalitions may help bind relationships of trust, several of the experts we spoke to believe that for standardisation to be successful at a global level, the community needs to find a way to reach an international consensus from a broad diversity of opinions and approaches. This is no mean feat.  

The need for diversity: being sensitive to context

“For AI ethics, there isn’t necessarily one approach – there is a need for a diversity of perspectives.”

Ray Eitel-Porter
Global Lead for Responsible AI
Accenture

This comment by Ray Eitel-Porter is at the heart of the conversation on standardisation. On the one hand, as David Leslie suggested, with ethics, trust and AI “we are facing universal problems and so there is a need for global-level recommendations”. Yet on the other hand, what is ethical and what enables trust can vary according to different cultural values. Leslie reminds us that we need to think about how we account for Ubuntu or other relational value systems that millions of people the world over believe to have meaning. 

The question for the ethical AI community is this: how can a diversity of perspectives be included to reach universally actionable international standards?

One obstacle stems from a point made by Leslie: that the “global AI innovation ecosystem is dominated by the Global North” – the conversation on both ethics and trust is overshadowed by North American and European players.  

This has led to what Emma Ruttkamp-Bloem, Chairperson of the UNESCO Ad Hoc Expert Group behind its Recommendation on AI Ethics, called an “epistemic injustice” in the Global South. When we spoke to her, she was passionate about the need to make universal ethical principles work in local contexts.

So how can this challenge be addressed? David Leslie has recently been working as the lead researcher on a project, PATH AI, which is exploring “different intercultural interpretations of values such as privacy, trust and agency” and how those “can be applied to new governance frameworks around AI”. 

Although the project is only at the research and consultation phase, it seeks to shape “the international landscape for AI ethics, governance and regulation in a more inclusive and globally representative way”. 

What PATH AI sets out to do echoes what UNESCO has done on policy with its Recommendation on AI Ethics. This brought together philosophers, lawyers, diplomats, practitioners and UNESCO’s Secretariat in a group of 24 people representing the six UNESCO regions. There were consultations for each region and with young people. The results were revised after input from intergovernmental institutions, then submitted to member states for a “landmark” diplomatic negotiation, Ruttkamp-Bloem told us.

Such an agreement is needed, she says, because of “the multinational status of the big tech companies, meaning that international laws and principles are key”. But perhaps what it demonstrates is how a shared understanding of ethical best practice can be established through an internationally representative and culturally sensitive process. It’s proof, Ruttkamp-Bloem says, “that this kind of collaboration is possible… and it helps countries in the Global South to at least have something to guide them”. 

The Recommendation provides a rigorous set of principles which can then be applied in particular languages and cultures. Yet Ruttkamp-Bloem makes it clear that protecting everyone equally from harm is a key objective.  

On the principle of privacy, for example, she observes that although it may cut against local values in some African communities, where a collectivist culture and ethic can make people more open about their private lives, this does not mean those people do not have a fundamental right to privacy. “There has to be sensitivity to cultures”, she says, “but there has to be the same protection against harms for everyone.” This is what the UNESCO Recommendation provides.

Balancing universal rights with specific cultural values may seem like a massive task for private sector enterprises, especially those who are just starting out on the artificial intelligence journey. Yet Ruttkamp-Bloem has a few suggestions about how commercial organisations can realise business value through policy: 

“In the end it is a strategic move to be ethically sensitive and culturally aware. You can do this by ensuring your tech team is diverse to help naturally detect biases, and respect your clients and their demands and rights by ensuring transparency.” 

Emma Ruttkamp-Bloem, Chairperson
Ad hoc Expert Group, UNESCO

She adds that a key ethical mindset that companies can adopt is to try and “meet people where they are”. Beena Ammanath raised a similar point: “We have to get down from this top level and get down in the weeds to figure out how this is going to happen – that’s part of the operationalisation, making it context-specific.”  

She described how this tension between universal standards and context specificity could play out:  

“With regulations and standards and best practices there is a reason we have different ones for different industries… We will start with broad ones, and then there's also movement going on from the bottom up – so we will have specific content for specific application areas.”

Beena Ammanath
Executive Director
Deloitte AI Institute

For Ammanath, it makes sense that movements towards standardisation and consensus start with broad reference points such as the UNESCO Recommendation, but she also thinks sector-specific standards will emerge from the bottom up.  

When we asked her whether she thought the EU AI Act would have content and sector-specific application areas, she replied: “Absolutely”, telling us that standards would inevitably take a similar approach. 

The approach, then, Ammanath says, is “you start from a broad overarching umbrella of do-no-harm, and then you ask ‘how do you actually make it real within a bank versus within a hospital?’” 


Looking ahead… 

In sustainability, global standards have been around for at least 30 years – from ecolabels and organic food labels to social welfare standards that aim to protect workers in ‘sweatshop’ factories. Today, according to one NGO, there are as many as 400 established standards and certifications that demonstrate the sustainability performance of organisations or products in specific areas.

Evidence suggests such standardisation is further off for artificial intelligence: the UK government has only recently announced the development of a technical standards hub for AI.

Yet our research has also led us to believe that a mature set of actionable standards and benchmarkable certifications, such as those of the IEEE, provides a helpful guide for best ethical practices in the development and deployment of artificial intelligence. 

Even though the path to standardisation still seems uncoordinated and the uptake of standards and certifications remains nascent, a wave of regulation is breaking.

The EU AI Act provides a codification of the trustworthy AI paradigm which, according to Mauritz Kop of Stanford Law School, “requires AI to be legally, ethically and technically robust while respecting democratic values, human rights and the rule of law”.  

The cost of compliance with what seems like imminent AI legislation will be financially unsustainable for many companies; standardisation, however, can reduce the burden and help them keep pace with the changes.

While regulations don’t fully mitigate ethical risks and harms, and although legal enforcement has not yet arrived, the set of proposed standards and certifications outlined in this section will give organisations a means of sense-checking their AI practices. It is likely that some of these, such as those developed by ISO/IEC SC 42, will become the gold standard that reflects the EU AI Act.

For now, what might seem like quite generic standards, such as those from the ISO covering trustworthiness or risk management, will at least help companies to align with initial regulatory moves such as the EU’s trustworthy AI paradigm and to anticipate later waves of harder legislation to come. 


The key points: 

  • The time has come for actionable ethics standards on artificial intelligence; there are a few organisations leading the development of such standards. 
  • Businesses have the opportunity now to prepare for an imminent wave of legislation. 
  • There is a business advantage to anticipating these developments while also being mindful of culturally diverse perspectives and inclusive ethical practices.