From ROI to responsibility: What business leaders need to know about AI governance

Date posted: 11 April 2025
Reading time: 3 minutes

In an era defined by exponential technological change, few developments are creating as much momentum - and uncertainty - as artificial intelligence. The latest episode of Beyond Boundaries, the Kainos podcast hosted by Chief AI Officer Gareth Workman, dives headfirst into a question that many business leaders are grappling with: how do you harness the power of AI while staying accountable, compliant and trusted?

To explore this topic, Gareth is joined by Ryan Donnelly, founder of AI governance platform Enzai. Their conversation is a compelling look at the strategic risks and opportunities leaders must consider when deploying AI at scale - from reputational and regulatory risk to the need for real business outcomes.

Beyond the hype: where’s the value?

AI's rise has been dizzyingly fast. “It does feel like sort of a modern industrial revolution,” Ryan notes. But that sense of excitement has also, at times, created a vacuum around business outcomes.

“There was this moment where people were exercising the technology for technical purity… but it was never tied to an outcome or an improvement,” Gareth observes. “It was almost an exercise to prove the art of the possible.”

According to Ryan, the return on investment is finally coming under proper scrutiny. “I think we're probably at a stage now where there's a lot of questions being asked. What is our return on investment here? What are we getting for our money?”

In short: the experimentation phase is over. Businesses are now being challenged to connect their AI ambitions to measurable, mission-aligned goals.


Governance is good business

For leaders navigating this shift, accountability can’t be an afterthought, especially as AI becomes more autonomous and embedded in customer and citizen interactions.

When Gareth asks how organisations can move from ‘checkbox compliance’ to genuine accountability, Ryan offers a grounded perspective. “Check boxes are not all bad… But they alone are not enough. You need trust – and that’s trust from people within your organisation, from regulators, and from end users.”

Transparency, he says, is key - but only if it’s pitched at the right level. “There’s no point in putting out a hundred-page technical document… You need to provide enough information to help people understand what these systems are doing, how they’re doing it, and what kind of data they’re working with.”

Gareth agrees: “Once you've written down the outcome you want to achieve and realise this is going to impact individuals, the questions around trust and responsibility start to surface.”

Regulations are coming – and that’s a good thing

With AI regulation progressing in Europe, the US and beyond, global companies are facing a fragmented and rapidly changing compliance landscape. But Ryan believes the direction of travel is both clear and largely positive.

“When it comes to AI, there’s just no doubt it needs laws and standards. It's one of the most powerful technologies we’ve ever developed.”

He acknowledges that regulatory frameworks such as the EU AI Act are complex, but maintains that their principles are sound. “Well-designed regulations are tremendously helpful. They’re an overwhelmingly positive thing – they help set us up for success.”

For companies unsure where to begin, Ryan keeps his advice practical. “Keep an inventory of the different AI systems you’re using. Subject them to risk assessments. Identify the risks and then take steps to mitigate them. Start somewhere – and start today.”
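Ryan's starting steps (inventory, assess, mitigate) could be sketched as a minimal data structure. This is purely illustrative: the class, fields and example systems below are assumptions for the sake of the sketch, not part of any real governance platform.

```python
from dataclasses import dataclass, field

# Illustrative sketch of Ryan's advice: inventory your AI systems,
# subject each to a risk assessment, and track mitigation steps.
@dataclass
class AISystem:
    name: str
    purpose: str                      # the business outcome it supports
    data_used: str                    # what kind of data it works with
    risk_level: str = "unassessed"    # e.g. "low", "medium", "high"
    mitigations: list = field(default_factory=list)

# Hypothetical inventory entries
inventory = [
    AISystem("support-chatbot", "customer service triage", "chat transcripts"),
    AISystem("credit-scorer", "loan decisioning", "financial records"),
]

# Step 1: surface anything not yet risk-assessed
unassessed = [s.name for s in inventory if s.risk_level == "unassessed"]

# Step 2: record an assessment and the mitigations taken
inventory[1].risk_level = "high"
inventory[1].mitigations.append("human review of all declined applications")
```

Even a simple register like this makes "start somewhere" concrete: it shows at a glance what is in use, which systems remain unassessed, and what has been done about the risks identified.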

Inaction is the greatest risk

The key message for leaders? Waiting isn’t an option. “Doing nothing is definitely just not an option,” Ryan says. “It’s worth repeating.”

Whether you’re just beginning your AI journey or scaling up, the call to action is the same: get started, be intentional, and put in place the right frameworks to grow responsibly.

As Ryan puts it, “This is a really unique opportunity that we're living through. Keep that optimism – but put the right guardrails in place to make sure you're getting the most out of it.”

Common pitfalls for AI adoption

As business leaders move beyond AI experimentation and into operational deployment, a number of common mistakes are becoming clear:

  • Treating governance as an afterthought
    Compliance checklists alone aren’t enough. Trust and transparency need to be built in from the start.
  • Delaying action while waiting for regulations to settle
    Regulatory clarity is coming – but inaction now will only increase future risk.
  • Assuming AI risk is purely a technical issue
    AI decisions increasingly affect people. Ethical and organisational questions are just as critical as engineering ones.
  • Failing to communicate clearly
    Stakeholders don’t need a 100-page technical spec – they need understandable, relevant explanations of what systems are doing and why.

Recommendations for leaders

Ryan and Gareth offer several practical takeaways for business leaders looking to build momentum responsibly:

  1. Start with visibility
    Build and maintain an inventory of your AI systems. Understand what you’re using, where and why.
  2. Tie AI to clear outcomes
    Don’t deploy AI for its own sake. Align your investment with measurable organisational goals.
  3. Make transparency work for your audience
    Tailor your communications so that end users, regulators and internal stakeholders can understand your systems at the right level.
  4. Treat governance as a business enabler
    Done right, governance doesn’t slow you down – it provides the clarity and confidence needed to scale responsibly.

S1 E3: Responsible AI: Leading with innovation and integrity

Gareth Workman and Ryan Donnelly discuss trust, accountability and responsible AI leadership in a fast-evolving landscape.

Watch now

Never miss an episode

Sign up for episode reminders, exclusive thought leadership and practical advice to navigate AI’s biggest challenges.