The value of AI ethics at the dawn of AI regulation
On Friday 8 December 2023, the European Union (EU) reached a deal to regulate artificial intelligence (AI). The exact wording of the final Act is still unknown, but its tenets have been communicated. Taking a risk-based approach, the Act will consider an AI system’s potential to cause societal harm and categorise it as minimal risk, high risk or unacceptable risk. For AI systems classified as high risk, there will be strict requirements for providers and users (or ‘deployers’, as users are called in the EU AI Act) alike. For providers, these include subjecting the AI system to a conformity assessment to demonstrate that it complies with mandatory requirements for trustworthy AI.
The EU AI Act will have a wide geographical reach: it will affect anyone wanting to place an AI system on the EU market, whether based inside or outside the EU. With potential penalties of up to €35 million or 7% of total worldwide annual turnover, whichever is higher, organisations will need to prepare to meet these substantial requirements.
In the absence of more formal rules, AI ethics has thus far played a leading role in guiding the development of trustworthy AI. Now that trustworthy AI principles are becoming hard-coded in the law, experience in operationalising ethical principles will bring significant value in this transition to compliance, and beyond.
AI ethics before compliance
Over the last few years, the field of AI ethics has flourished. A multitude of AI ethics frameworks were released by public and private organisations alike, articulating the principles and values that are important to consider when developing AI responsibly. In a review of 200 such guidelines, Nicholas Kluge Corrêa and colleagues concluded that the five most cited AI ethics principles are (i) transparency/explainability/auditability, (ii) reliability/safety/security/trustworthiness, (iii) justice/equity/fairness/non-discrimination, (iv) privacy and (v) accountability/liability.
These very same themes are reflected in the EU AI Act:
| AI Ethics Principle | EU AI Act* |
| --- | --- |
| Transparency / explainability / auditability | High-risk AI systems will need to be made traceable and auditable, and documentation must be kept. Specific transparency requirements are imposed on certain AI systems where there is a clear risk of manipulation (e.g. via chatbots) |
| Reliability / safety / security / trustworthiness | Providers of high-risk systems will need to implement quality and risk management systems and ensure regular monitoring |
| Justice / equity / fairness / non-discrimination | High-risk AI systems will need to be trained and tested with sufficiently representative datasets to minimise the risk of unfair bias and to ensure that false positive/negative results do not disproportionately affect protected groups (illustrated in the sketch after this table) |
| Privacy | While privacy is already regulated under the EU General Data Protection Regulation (GDPR), the Act references data protection, among other places, in acknowledging that fundamental rights impact assessments of high-risk AI systems may be combined with data protection impact assessments |
| Accountability / liability | Human oversight measures for high-risk AI systems will need to be described as part of a fundamental rights impact assessment |
*Source: European Commission (12 December 2023), Artificial Intelligence – Questions and Answers
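To make the fairness row above concrete, below is a minimal sketch of the kind of check a team might run over a model’s predictions: it computes the false positive rate per protected group and flags gaps beyond a chosen tolerance. The record format, function names and tolerance value are illustrative assumptions; the Act does not prescribe a specific metric or threshold.

```python
from collections import defaultdict

def false_positive_rates(records, group_key="group"):
    """Compute per-group false positive rates from labelled predictions.

    Each record is a dict with keys: group, y_true (0/1), y_pred (0/1).
    Illustrative only; the metric choice is an assumption, not a legal requirement.
    """
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # actual negatives per group
    for r in records:
        g = r[group_key]
        if r["y_true"] == 0:
            neg[g] += 1
            if r["y_pred"] == 1:
                fp[g] += 1
    return {g: fp[g] / neg[g] for g in neg}

def flag_disparity(rates, tolerance=0.02):
    """Flag if the gap between the highest and lowest group FPR exceeds tolerance."""
    gap = max(rates.values()) - min(rates.values())
    return gap > tolerance, gap

if __name__ == "__main__":
    sample = [  # hypothetical predictions for two protected groups
        {"group": "A", "y_true": 0, "y_pred": 1},
        {"group": "A", "y_true": 0, "y_pred": 0},
        {"group": "B", "y_true": 0, "y_pred": 0},
        {"group": "B", "y_true": 0, "y_pred": 0},
    ]
    rates = false_positive_rates(sample)
    flagged, gap = flag_disparity(rates)
    print(rates, "disparity flagged:", flagged, "gap:", round(gap, 3))
```

In practice, checks like this would sit alongside broader bias analyses and representative-data reviews rather than replace them.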
Practically, this means that those who have already built out their AI ethics frameworks will be a step ahead when it comes to implementing the EU AI Act. This is especially true when combined with a mature Machine Learning Operations (MLOps) infrastructure that addresses data quality assurance, traceability and cybersecurity; implementing the Act should then feel like an expansion of existing practice rather than a novel set of rules. AI ethics forms a strong foundation for compliance.
AI ethics in support of compliance
For those still early in their journey towards trustworthy AI, the good news is that some of us have been working at it for a while: we have gained experience, built resources and levelled up our expertise.
AI ethics does not stop at articulating principles. Various techniques have been tried and tested, and there are tools we can leverage, such as model cards to increase transparency and guidance to help address fairness in AI models. The fundamental rights impact assessments introduced by the Act echo steps we have already been taking in ‘harm workshops’, which we run iteratively throughout AI development to identify potential harms and mitigations.
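As an illustration of how lightweight such tooling can be, here is a minimal sketch of a model card captured as a structured record. The ModelCard class and its fields are hypothetical, assumed for this example rather than taken from any standard schema; established published templates exist and may be preferable in practice.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: field names are assumptions, not a standard
# model-card schema.
@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: str
    training_data: str
    evaluation_metrics: dict = field(default_factory=dict)
    fairness_considerations: str = ""
    limitations: str = ""

    def to_markdown(self) -> str:
        """Render the card as a short markdown document for review."""
        metrics = "\n".join(f"- {k}: {v}" for k, v in self.evaluation_metrics.items())
        return (
            f"# Model Card: {self.model_name} (v{self.version})\n\n"
            f"## Intended use\n{self.intended_use}\n\n"
            f"## Out-of-scope uses\n{self.out_of_scope_uses}\n\n"
            f"## Training data\n{self.training_data}\n\n"
            f"## Evaluation metrics\n{metrics}\n\n"
            f"## Fairness considerations\n{self.fairness_considerations}\n\n"
            f"## Limitations\n{self.limitations}\n"
        )

if __name__ == "__main__":
    card = ModelCard(
        model_name="loan-eligibility-classifier",  # hypothetical example
        version="1.2.0",
        intended_use="Triage support for human reviewers; not for automated decisions.",
        out_of_scope_uses="Fully automated credit decisions without human review.",
        training_data="Anonymised historical loan applications; see data sheet.",
        evaluation_metrics={"accuracy": 0.91, "false_positive_rate_gap": 0.02},
        fairness_considerations="FPR/FNR gaps monitored across protected groups.",
        limitations="Performance degrades for applicants with thin credit files.",
    )
    print(card.to_markdown())
```

Rendering the card to markdown keeps it easy to review alongside other project documentation.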
AI ethics expertise can help organisations bring the EU AI Act to life, supporting data scientists and lawyers alike in getting to grips with the requirements and understanding the lay of the land.
AI ethics beyond compliance
In other words, organisations that have been working on AI ethics have a head start on compliance. AI ethics expertise can prove critical in upskilling governance and development teams to meet the requirements set out in the EU AI Act. And for those with the appetite and vision, there is an opportunity to use principles and ethical considerations to go beyond even comprehensive regulation like the EU AI Act.
The EU AI Act is predominantly designed to address a limited range of high-risk use cases, whereas arguably all AI use cases can benefit from at least basic considerations around transparency, fairness and accountability. The Act lists some applications of AI as ‘unacceptable’ and aims to have fundamental rights impact assessments inform critical thinking about the harm of high-risk use cases. In our experience, such analysis can prove challenging where interests overlap, and some cases will require special attention and debate. More mature companies have set up AI ethics councils to discuss ‘edge’ use cases, where an impact assessment may lead to conflicting insights. And in a space that has seen new techniques and models released almost weekly over the last year, critical, ethical thinking inevitably remains important where the law cannot pre-empt every new development.
At Kainos, we have built AI ethics frameworks contextualised to public sector departments and embedded responsible AI principles in the delivery of generative and predictive AI solutions in both the public and private sectors. Our team of more than 150 Data and AI experts has supported organisations including the United Nations, HMLR, the Ministry of Defence and HelloFresh. We are also partnering with academia to develop new, evidence-based AI ethics solutions.