An ethics-by-design approach to AI: A guide for technology executives

Get best-practice tips on implementing AI responsibly, so you can develop a robust ethical framework for innovation.
Date posted: 30 January 2025
Reading time: 5 minutes

AI is helping organisations become more efficient, insightful and innovative. But in the race to adopt AI technologies, many overlook the ethical complexities – from algorithmic bias to data privacy breaches and transparency issues.

However, focusing on ethics is critical for building trust. According to McKinsey, 72% of customers want transparency on a company's AI policy – and organisations that become digitally trusted are 1.6 times more likely to see 10%+ growth rates.

For Chief Data Officers (CDOs) and Chief Information Officers (CIOs), the ethical challenges surrounding AI implementation have the potential to impact brand reputation, compliance and long-term business sustainability.

So how can you strike a balance that enables you to drive long-term business value while navigating the ethical risks in AI implementation?

The solution: An ethics-by-design approach

To address these challenges, it’s crucial to prioritise ethical considerations at every stage of the AI lifecycle. This approach, known as ethics-by-design, ensures ethical principles are embedded in every element of AI development and deployment.

Here’s how to get started:

Step 1 – Don’t be tempted to focus on short-term gains

“We know we need a cohesive AI strategy, but short-term initiatives keep taking precedence” is something we often hear.

However, without proper precautions, short-term AI projects can lead to biased, opaque or privacy-invading algorithms – which can damage reputations and incur hefty fines.

In other words, a short-term approach can stunt long-term value. To achieve that long-term value, you need a big-picture approach.

Step 2 – Define guiding ethical principles to inform your AI approach

An ethics-by-design approach involves asking critical ethical questions from the beginning of the AI innovation lifecycle. These include:

Is AI really the best solution to the problem?
What impact could this AI system have on stakeholders?
What’s our definition of fairness in this scenario?
What should our bias audit approach look like?
How can we ensure our AI development team is diverse and inclusive?

Use your answers to define a set of guiding principles that reflect your organisation’s values. Address key areas such as fairness, transparency, accountability and privacy. Ensure you revisit these principles at every development phase, utilising them as ethical checkpoints that incorporate impact assessments and bias audits.
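
To make this concrete, here’s a minimal sketch of what a bias audit checkpoint could look like in practice. It uses demographic parity difference – one common fairness metric among many – and the groups, data and 0.1 limit are purely illustrative assumptions; your own answers to the fairness questions above should determine the metrics and thresholds you adopt.

```python
# A minimal bias-audit checkpoint: compare positive-prediction rates across
# groups defined by a sensitive attribute (demographic parity difference).
# All names, groups and the 0.1 limit below are illustrative assumptions.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, sensitive: np.ndarray) -> float:
    """Largest gap in positive-prediction rate between any two groups."""
    rates = [y_pred[sensitive == group].mean() for group in np.unique(sensitive)]
    return float(max(rates) - min(rates))

def bias_audit_checkpoint(y_pred, sensitive, max_gap: float = 0.1) -> bool:
    """Return True if the model passes this fairness checkpoint."""
    gap = demographic_parity_difference(np.asarray(y_pred), np.asarray(sensitive))
    print(f"Demographic parity difference: {gap:.3f} (limit {max_gap})")
    return gap <= max_gap

# Hypothetical predictions for applicants in two groups, "A" and "B"
passed = bias_audit_checkpoint(
    y_pred=[1, 0, 1, 1, 0, 1, 0, 0],
    sensitive=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print("Checkpoint passed" if passed else "Checkpoint failed: investigate and mitigate")
```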

Bear in mind that diversity in AI development teams is critical to identifying and mitigating biases. By involving people with varied perspectives and backgrounds, you’ll enhance your ability to anticipate potential ethical issues and design more inclusive solutions.

Step 3 – Ensure your principles align with legislation and regulatory guidelines

As you establish your guiding AI principles, make sure they align with legislation and regulatory guidelines. For example, the recently introduced EU Artificial Intelligence Act applies to organisations operating in or marketing to the EU – and breaches can result in hefty fines.

In the UK, the King's Speech (of July 2024) discussed establishing appropriate legislation to place requirements on the developers of the most powerful AI models. At the same time, the Department for Science, Innovation and Technology launched an AI Action Plan to foster growth in the AI sector.

While evolving, the UK government's approach highlights a vision to reap the benefits of AI while keeping ethics central to development. By empowering regulators to guide and inform AI adoption, the government’s approach encompasses ethical principles spanning safety, security, transparency, fairness and more.

Step 4 – Adopt an ethical AI framework to streamline your approach

Once you’ve established your guiding principles, it’s time to implement them. But to optimise this process and pave the way for long-term sustainable AI innovation, it’s vital to devise an ethical AI framework that incorporates:

Strong security protocols – to protect AI systems and data, essential for maintaining trust and compliance.
Regulatory checkpoints – to ensure your AI initiatives adhere to relevant standards and avoid legal and ethical issues, for example ISO 42001, the NIST AI Risk Management Framework, the EU AI Act and UK regulatory guidance.
Testing and validation of AI models – to ensure they perform as expected and to identify and address any issues early on.
Fairness checkpoints – including bias audits and potential bias mitigation steps.
Impact assessments – for example, privacy reviews to ensure compliance with data protection regulations and stakeholder consultations to understand concerns.
Transparency documentation – clearly documenting how the AI system was built, how it is expected to be used and its limitations (a sketch of one lightweight approach follows this list).
Response plans – for incidents arising from AI systems, so you can tackle problems quickly and effectively.
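
As an illustration of the transparency documentation element, here is a hedged sketch of a lightweight "model card" captured as structured data. The fields, names and values are assumptions for the purpose of the example – many organisations adapt published model-card templates to their own governance needs.

```python
# An illustrative sketch of transparency documentation as structured data:
# a lightweight "model card" stored alongside each deployed model.
# The field names and example values are assumptions, not a formal standard.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    training_data_summary: str
    known_limitations: list = field(default_factory=list)
    fairness_checks: dict = field(default_factory=dict)
    owner: str = ""

card = ModelCard(
    model_name="loan-approval-classifier",  # hypothetical system
    version="1.2.0",
    intended_use="Decision support for loan officers, not fully automated decisions.",
    training_data_summary="UK retail applications, 2019-2023, anonymised.",
    known_limitations=["Not validated for business lending", "Sparse data for under-25s"],
    fairness_checks={"demographic_parity_difference": 0.04},
    owner="credit-risk-ml-team",
)

# Publishing the card alongside the model artefact gives auditors and delivery
# teams a clear record of how the system was built, its intended use and limits.
print(json.dumps(asdict(card), indent=2))
```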

Step 5 – Scale responsible AI delivery

An AI ethics framework should form part of your broader ethics-by-design strategy to scale responsible AI delivery.

From defining your ethical AI goals to implementing your framework, integrating it into governance structures and creating an AI policy, this strategic approach won’t just drive business value – by keeping ethics at its core, it will also mitigate risks and potential negative impacts.

For example, using your AI framework to expand existing governance structures ensures you can manage and monitor AI risk at scale, just as you would for cyber security and data privacy. An AI policy, meanwhile, sets clear expectations around AI use and development, so delivery teams understand your framework and can embed it into their usual ways of working.

Finally, remember that AI systems evolve. To stay ahead of the curve when it comes to ethical challenges, it’s important to regularly monitor AI performance – updating models when required and remaining vigilant for emerging risks.
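
To give a sense of what that monitoring can involve, here’s a brief sketch of one common drift check – the population stability index – which compares the data a model sees in production with the data it was trained on. The bin count, 0.2 threshold and synthetic data are illustrative assumptions; real monitoring would also track accuracy, fairness metrics and incident signals.

```python
# A simple sketch of ongoing monitoring: a population stability index (PSI)
# check that flags when live input data drifts away from the training data.
# The bin count, 0.2 alert threshold and synthetic data are illustrative only.
import numpy as np

def population_stability_index(expected, actual, bins: int = 10) -> float:
    """Compare the distribution of a feature at training time vs in production."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    expected_pct = np.clip(expected_pct, 1e-6, None)  # avoid log(0) / divide-by-zero
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
training_values = rng.normal(0.0, 1.0, 5_000)   # feature seen during training
live_values = rng.normal(0.4, 1.2, 5_000)       # same feature in production (drifted)

psi = population_stability_index(training_values, live_values)
if psi > 0.2:  # widely used rule-of-thumb threshold for significant drift
    print(f"PSI = {psi:.2f}: significant drift detected - review and consider retraining")
else:
    print(f"PSI = {psi:.2f}: distribution stable")
```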

Transform your approach with ethics-by-design AI

Organisations that prioritise ethics in their AI adoption build trust with customers, employees and regulators, create a strong foundation for sustainable growth, and are best positioned to navigate complex regulatory landscapes and avoid costly penalties.

Moreover, by encouraging teams to think critically, ethical AI also fosters innovation – with the potential to uncover new opportunities and improve overall system performance. And you can then more effectively move beyond AI experimentation to generate business value, achieving a range of benefits – from data-driven decision-making to enhanced quality control.

Ready to generate business value from AI?

Looking for strategies to adopt and scale responsible AI solutions and propel your organisation into the future of AI?

Save your seat at one of our upcoming AI events in partnership with Microsoft. You’ll gain insights and practical strategies to help your organisation move beyond AI novelty to generate business value.