Beyond Boundaries S1 E3 - Responsible AI: Leading with innovation and integrity
In this episode of Beyond Boundaries, Gareth Workman, Kainos' Chief AI Officer, is joined by Ryan Donnelly, Founder of Enzai, to explore one of the most pressing challenges in AI today: ensuring trust, accountability and compliance in an era of rapid technological advancement.
As AI systems become more powerful and embedded in business operations, questions around responsibility and transparency take centre stage. Who is accountable for AI-driven decisions? How can businesses move beyond compliance to foster real trust? And with global regulations evolving, how can leaders stay ahead while continuing to innovate responsibly?
Join us as we break down the complexities of AI governance, discuss the latest regulatory shifts, and uncover practical strategies for navigating this ever-changing landscape.
The full episode transcript is available below.
Transcript
Teaser
Ryan Donnelly
Start somewhere and start today, because these technologies are not going away. They're definitely not going back in a box. They will have, and they already are having, a tremendous impact on how we do business around the world, how we live our lives. Make sure that you're at the forefront of this. We live in a time of, like, just exponential growth - and if you sit still for any length of time at all, you get left behind real quick. Doing nothing is not an option when it comes to these tools.
Start of episode
Gareth Workman
Welcome to Beyond Boundaries, the podcast brought to you by Kainos, designed to help business leaders navigate the fast-moving world of AI. I'm Gareth Workman, Chief AI Officer at Kainos - and today, we're tackling one of the biggest challenges facing business leaders right now: how to embrace AI's potential while ensuring trust, accountability and compliance in an increasingly complex landscape.
AI is advancing rapidly. That's creating exciting opportunities for innovation and transformation, but with progress comes new challenges, especially around responsibility, transparency, and navigating evolving global regulations. So as AI systems become more capable, questions naturally emerge. Who's accountable for AI's decisions? How can organisations move beyond compliance to earn real trust? And with global regulations evolving, how can businesses stay ahead and keep innovating responsibly? So, to help us navigate all of this, I'm joined by Ryan, founder of Enzai, who are at the forefront of AI governance. Ryan, welcome to Beyond Boundaries. It's great to have you with us today.
Ryan Donnelly
Aw, it's a pleasure to be here. Thank you very much for having me.
Gareth Workman
Fantastic. So, AI is advancing at an unprecedented pace. What excites you most about the trajectory, and how does Enzai contribute to shaping responsible AI adoption?
Ryan Donnelly
Yeah, I think the pace of it, first of all, has caught so many people a little bit by surprise - just how good it is, and then how quickly it's getting even better. That is tremendously exciting. It does feel like a modern industrial revolution, in the sense that it's got so many opportunities to really impact the way that we live our lives, which is great. And it's a really exciting thing to live through, I think.
But then also, of course, with that comes, as you mentioned in your introduction, a lot of potential risks as well. When the steam engine came in, it was tremendously dangerous until we figured out ways to put the right sort of guardrails up around it and get the most out of the technology. And I think we're going through something very similar here now with AI. So that speed is really, really exciting. But now it is a real challenge to stay at the forefront.
Gareth Workman
No, absolutely. The advancements here are just phenomenally fast, even compared to the industrial revolution - it's almost 100 years squashed into 10, potentially. Sometimes you'll hear people describe AI adoption as a grey area, or as the 'wild west', given the pace of movement. So here's a question for you: what are the biggest risks that companies are overlooking right now? There are the obvious things they're looking at - but what do you think are the things they're really missing?
Ryan Donnelly
Yeah, so this is interesting, right? There's the stuff that I think should be very obvious - although you'd be surprised how often it isn't. The ROI question for a long time was kind of on the back burner as people got really excited, rushing into this and investing very heavily in these tools in all kinds of different directions. Right at the start, if you'd asked about ROI considerations, people would have said they were certainly looking at those kinds of things. But really, were they?
Because it was so exciting to invest in technologies and see what they could do for the business. And that is cool, because ROI doesn't always have to come in the first week, the first month, the first year. But I think we're probably at a stage now where there's a lot of questions being asked, right? What is our return on investment here? What are we getting for our money? So I'm seeing that, to be honest, become a more and more frequent consideration as people think, right, okay, how do we actually track the money that we've spent on this? And how do we actually track the results?
So that is one risk. It's very obvious. And then there's a range of different risks, and it really all depends from there on the system and the nature of your business. Unfortunately, I'm not going to give you one magic-bullet-type risk that everyone should think about in every circumstance. But I think a lot about reputational risk, for example. Whenever you've got very powerful tools doing very amazing things, you've got to have ways of protecting the reputation that you've built as an organisation, as a brand - because it's so hard to build, but so easily lost. And you've definitely seen that. We've all seen examples where large organisations' reputations and brands have taken a hit because of AI systems that they just didn't manage properly and probably didn't have the right governance around either. So that's one.
Then there are risks that depend on the nature of the business as well. For systems dealing with real people's lives, there are genuine and deep fairness and bias concerns that you've got to address proactively when using or building these tools. And then there's legal and regulatory risk, and many others. So there's a wide range of risks.
And actually, I think if you focused on one risk alone, you would be doing yourself a disservice, because you'll miss something else - and all of those other risks, added together, could end up being bigger than that first big risk anyway. So you've got to do it all.
Gareth Workman
Yeah. I think your point about ROI is really interesting, because for a lot of people, for the first while, it was almost about exercising the technology for technical purity. Can you do something? It was never tied to an outcome or an improvement - it was almost an exercise to prove the art of the possible. And then it's like: okay, now we've done that. Where's the value? And they've had to go back and think, okay, how is this going to positively impact my business, or my customers, or citizens, or whatnot?
Ryan Donnelly
Exactly. And I think at the start of a new technological revolution, you don't have to think about those things upfront. The potential of the technology was pretty obvious in the early days, so you're okay to do a lot of this, experiment, and worry about the kind of return you're going to get from it down the line - because some of the greatest ideas come out of experimenting, you know. I don't think the Wright brothers were thinking about ROI when they designed their first plane, so that's okay.
But AI generally - it depends how you define it - has been around for a while, right? And the generative AI stuff that's really taken hold has now been around for a couple of years. So I think it is right to start asking: actually, what's the end result here? What is the business goal that we're trying to achieve? How do we put some proper guardrails on this so that we get the right results?
And those who are doing that are getting the results, actually. So thinking about these things - I always use this analogy. Obviously we're a startup, right? So we think a lot in the startup world about speed, and you've got to have a lot of speed. But as we've learned, occasionally to our detriment, speed in the wrong direction is irrelevant and can actually be very damaging. You need to have speed, but speed alone is not enough. And it's the same with AI and your AI programme in a large enterprise.
Moving quickly to adopt these tools alone is not enough - you need to put direction on it. If we do that as a startup and put some real direction on our business, then you create velocity, and velocity becomes self-fulfilling. It's like a hamster wheel: the quicker you get, the faster the whole thing goes. In a modern enterprise, a lot of that direction is going to be set by things like the business outcomes and goals you're trying to achieve. Having that, and really understanding it - I think we're at the stage where that is a critical consideration, and it's still, unfortunately, being a little bit overlooked, in my opinion.
Gareth Workman
I completely agree, Ryan. I think the thing is, once you write down your direction and, as you say, start to build velocity, you start to ask yourself harder questions. When you're in that technical purity phase, it's less of a challenge. But once you've written down the outcome you want to achieve, and it's clear it's going to impact individuals, the questions start - and trust is critical for AI adoption. So for you, what does it take to make AI truly accountable? How do organisations move beyond checkbox compliance to real responsibility in terms of their actions?
Ryan Donnelly
Yeah, so actually, very quickly, an ode to the checkbox, right? I can see why things are sometimes labelled a 'tick box exercise' in a pejorative sense - to say, hey, it's just a tick box thing. But tick boxes are better than no boxes, quite frankly, and really well-designed tick boxes can be tremendously effective, okay? They're a really effective tool that we've got. There's a book called The Checklist Manifesto that, if you haven't read it, I'll lend you, Gareth. It is a fantastic book in aid of checklists and what they've done for the world - how they've revolutionised really dangerous industries. Great stories about how pilots used them and how they were then rolled out across hospitals.
Checkboxes are not all bad, okay? In fact, they can be pretty good. But they alone are not enough, right? You need trust in these systems, and that's trust from many different angles. That's trust from people within your organisation as you build and adopt these tools. That's trust from the wider ecosystem of government and regulators. That's trust from end users who may have very little understanding of how these tools work or what they're doing, but are living their lives. How do you create trust in that kind of ecosystem? I don't have all the answers for that, but I do think well-designed checkboxes are at least a good place to start.
Another thing that, in my experience, can help is transparency: trying to provide enough information to enable people to understand what these systems are doing, how they're doing it, and what kind of data they're working with - pitched at the right level of abstraction, right? There's no point in putting out a hundred pages of technical documentation. Even the foundation model system cards - they're great, they're really interesting, but actually they're pretty inaccessible to most people. So how do you pitch that at the right level of abstraction so that most people can understand it?
Getting that right, I think, can really help. But again, like most things in this space, I don't think there's one magic bullet you can fire and be completely assured that, hey, you're going to get trust. It's a combination of things, and it's built over time through how you as an organisation operate and use these tools. But it's a tremendous currency, actually. And it's becoming more important by the day in this modern enterprise world where everything is so interconnected.
The world runs on trust. It really does. And trying to establish that and having a really structured strategy to be able to establish that is going to be tremendously effective in the AI world.
Gareth Workman
Yeah - fabulous book. I have also been enlightened by The Checklist Manifesto. It's been a while since I've read it, but one bit around the planes stuck with me: when you're in danger, fly the plane. It's that simple kind of prompt - what are the obvious things you need to be doing? As it were.
Ryan Donnelly
Yeah, I'm certainly not a checklist hater - I'm actually a huge fan of them. They're not perfect for every situation, don't get me wrong, and badly designed checkboxes are useless, I agree. But if you can have some decently designed ones, they can get you very far. They really can.
Gareth Workman
Yep. One of the things that's well recognised now is that AI regulation is evolving, but it lags behind the innovation - the pace of advancement is just phenomenal. So, what do you think leaders and innovators should be doing today to strike that balance, so that they're not too far one way, waiting on regulation, or pushing the boundaries too far the other way and potentially ending up in breach? How do you find a way to strike that balance?
Ryan Donnelly
Yeah, I've looked into the history on all this. I've gone down rabbit holes actually trying to figure this out, because naturally regulations are normally quite reactive rather than proactive. There are definitely proactive attempts to regulate things in the future - assumptions regulators sometimes make about scenarios that may arise, which they try to cater for. But on the whole, and particularly when it comes to new areas, they are reactive: the technology is introduced, something very crazy happens, and regulators around the world decide they don't like the outcome and want to try and prevent it in the future - and here's the way they think they're going to do that.
It normally takes a lot longer for regulations to follow big technical innovations. So I think actually in the grand scheme of time, like this is probably the quickest that we've ever seen regulators come out with really comprehensive rules, laws around how you use, deploy, build AI.
And I'm thinking specifically there of the EU AI Act, because the first proposal, the white paper on that - what was that, 2018, 2019? - really wasn't very long after GDPR. And as they were drafting it, the whole generative AI wave took off as well, so they had to update the draft to include provisions on general purpose AI. So it is probably one of the most proactive I've seen.
But to many people's point, we live in a time of exponential growth, so it still isn't quick enough in this day and age. I don't think I've got an answer for that, but I will say there are two schools of thought on this, really. And you can see a lot of the differences between a traditional European mindset and, say, a Californian, Silicon Valley tech mindset.

The traditional European view is: we need to have rules, we need to protect consumers. The traditional Silicon Valley, California view is: we need to move fast, and rules in general are just bad - we don't need them.

I don't think either of those positions is fully correct, actually. Like most things, the truth ends up being somewhere in the middle. Well-designed regulations are tremendously helpful, and they are an overwhelmingly positive thing.
Gareth Workman
Yeah.
Ryan Donnelly
Now, where this whole space gets a bad rep is that there are lots of laws in force today that should be scrapped because they are unworkable or impractical - they cater for a situation that does not reflect the modern world. There are lots of flawed laws out there, and I can see why people get annoyed at bureaucracy, at the overhead of trying to meet these requirements. But on the whole, I think regulation is a good thing. And when it came to AI, there's just no doubt that it needs laws and standards around it. It's one of the most powerful technologies we've ever developed.
To make sure that we get the most out of it as humanity, we need to have the right things in place to set us up for success. Think of whatever your favourite game or sport is - chess is amazing because it has some pretty complicated rules that make it really good. Same with your favourite sports. So rules, on the whole, are not something to fear or be afraid of - but we have to make sure that we get the rules right.
Gareth Workman
Yeah.
Ryan Donnelly
And we can certainly debate the merits of some parts of the EU AI Act and some of the other regulations around the world. But I think the general direction of travel and the principles there are pretty sound.
Gareth Workman
I think you're right. The regulatory landscape is becoming a bit more fragmented because of this pace - everyone's trying to resolve the same challenges so that we get the right outcomes, protect humanity and set ourselves off in the correct direction.
So, in terms of those different rules and regulations emerging across the EU, the US and beyond - how should companies navigate that sort of complexity, particularly those with a global footprint? What's the one bit of advice you'd give them on navigating those upcoming regulations?
Ryan Donnelly
Yeah, so with complex technology there's naturally going to be a level of complexity to the rules that go around it. There just is - it's just a fact. And I think that's what we're seeing here. The rules and laws around data, for example - GDPR and CCPA - are complicated, don't get me wrong, but I think they're a lot more straightforward than the rules around AI need to be. So it is a difficult area.
I'm not going to do the shameless plug, but of course software can help with this, and there are AI governance platforms out there that can help people navigate this sort of thing - of which we are just one. I think there are ways to cut through this complexity and make it easy to follow and easy to understand what the obligations are. At the end of the day with this company, I'll know I've done my job well if we can build a product that makes compliance so easy, basically a no-brainer, that you'd ask: why wouldn't you just go through the compliance journey? Why wouldn't you make sure all of your AI is covered, when it's an easy thing to do and you can do it at scale with software? And then the goal out of that is actually just to get the most out of these technologies.
I mean, sometimes the space of AI governance, and AI safety generally, gets a bad rap - which I don't think is entirely undeserved at times, to be honest, because there's a lot of doomsday element to it, and over-caution in a way. But I'm an optimist, right? I want to see these technologies flourish at scale, at speed. I really want to see that happen, and I want to be a part of making it happen. But I know that direction we talked about earlier is essential. So as part of that direction: go through that compliance journey, comply with the laws and standards in the space, and get the most out of the technology on the other side.
Gareth Workman
Yeah. And as you say, with that you'll ask yourselves the challenging questions about whether you're doing the right thing, to hopefully avoid those doomsday scenarios you mentioned there. Now, this is a difficult question - but as AI systems become more autonomous and start taking action, who ultimately owns their decisions? Is it the companies? Is it the developers? Is it the AI itself, as another 'person'? What do you think - or what would legislation say?
Ryan Donnelly
Yeah, so there's one of those I can discount real quick: I definitely do not think the machines own their own responsibility at this stage. I think we can trace all of it back to humans, so I feel pretty confident in saying that. But when you start looking at the humans involved, attributing liability across that value chain is probably one of the hardest risk-management-slash-regulatory questions in this space today, and I definitely do not think we have all the answers yet. There are so many different permutations and elements of context that can change the analysis.
Gareth Workman
In terms of helping someone on that journey, what sort of things should they do if that challenge ever arises? What are the good things they should do to be able to, at least with confidence, say they've tried to do the right things or whatnot? What sort of advice would you give them?
Ryan Donnelly
Well, so is this in the context of like an enterprise, some sort of organisation?
Gareth Workman
Yeah. So, people that are maybe leaning more heavily into AI-driven automation, whether that's in manufacturing or retail. What advice would you give them, as leaders running those businesses, to make sure that they're doing the right thing - especially when they start to use more autonomous AI in engaging with their customers or citizens, or whoever it might be?
Ryan Donnelly
Yeah, it's actually a couple of really simple, practical tips, to be honest. Probably easily said, a little bit more difficult to do. But it's the simple stuff. Keep an inventory of all the different AI systems that you're using across your organisation. Subject them to risk assessments - and to be honest, don't worry about getting that perfect at the start; just put them through some sort of process to identify any kind of risks. And once you've identified the risks, you know what you're working with, and you can take steps to mitigate them.
Doing those three simple things at the start can save a tremendous amount of headache. Your programme will evolve from that eventually, and you'll have fully comprehensive processes to figure out: are you a provider? A deployer? An authorised representative? A downstream provider? These are all legal terms under the EU AI Act. You'll get there and you'll evolve to that process. But start somewhere and start today. Because these technologies are not going away. They're definitely not going back in a box.
They will have, and they already are having, a tremendous impact on how we do business around the world, how we live our lives. Make sure that you're at the forefront of this. We live in a time of just exponential growth, and if you sit still for any length of time at all, you get left behind real quick. So, doing nothing is not an option when it comes to these tools. But you don't have to do everything all at once when it comes to the full risk management and governance process around this. Just get started.
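To make those three steps concrete - keep an inventory, run each system through a risk assessment, record mitigations - here is a minimal illustrative sketch in Python. Every name, field and risk category below is hypothetical, chosen purely for illustration; a real AI register or governance platform would be far richer than this.

```python
# Illustrative sketch only: a toy AI-system register covering the three steps
# Ryan describes - inventory, risk assessment, mitigation. All names and
# categories here are hypothetical, not a prescribed schema.
from dataclasses import dataclass, field

@dataclass
class Risk:
    category: str          # e.g. "reputational", "bias", "regulatory"
    severity: str          # e.g. "low", "medium", "high"
    mitigation: str = ""   # step taken to reduce the risk; empty until addressed

@dataclass
class AISystem:
    name: str
    owner: str             # the accountable human or team - never "the machine"
    purpose: str
    risks: list[Risk] = field(default_factory=list)

    def unmitigated(self) -> list[Risk]:
        # Risks identified but not yet addressed.
        return [r for r in self.risks if not r.mitigation]

# Step 1: keep an inventory of every AI system in use across the organisation.
inventory = [
    AISystem("support-chatbot", owner="Customer Ops",
             purpose="answer customer queries"),
]

# Step 2: subject each system to a (deliberately imperfect) risk assessment.
inventory[0].risks.append(Risk("reputational", "high"))
inventory[0].risks.append(Risk("bias", "medium"))

# Step 3: record mitigations as you take them.
inventory[0].risks[0].mitigation = "human review of escalated conversations"

# Anything still unmitigated is what you're working with next.
for system in inventory:
    for risk in system.unmitigated():
        print(f"{system.name}: unmitigated {risk.category} risk ({risk.severity})")
```

The point of the sketch is the shape, not the code: a named owner per system, an explicit list of identified risks, and a visible gap between "identified" and "mitigated" that the programme works to close over time.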
Gareth Workman
Yeah, I think that's the thing. As you say, sometimes when you look at something, it's like facing a cliff that just looks unclimbable, and there's almost this paralysis about doing anything. So, as you say: start small, start building your inventory and move from there. Momentum will build - but don't try and do everything in one fell swoop.
Ryan Donnelly
Yeah, it's like any problem: break it down into its constituent parts, figure out which is the easiest part to tackle first that will have some impact, and get going. There's a load of trial and error along that journey, to be honest. And there's a real organic growth journey, I think, that organisations have to go on with this - as they get more sophisticated, they get faster at doing it, they understand the risks a little better, and they can apply additional controls for additional circumstances and manage the risks in a really comprehensive and complete way. But you've just got to get started. As you say, the area is moving so quickly that doing nothing is definitely just not an option. It's worth repeating.
Gareth Workman
Yeah. So, looking ahead - and we'll not hold you to this, Ryan - what's the one radical shift in the oversight of AI that you expect in the next five years? What do you think might change in that space?
Ryan Donnelly
I think when it comes to the oversight of AI, it's actually probably reasonably clear now, because we have a bit of a path ahead of us under the regulatory initiative set out in the EU AI Act. Things like having notified bodies in all of the member states to regulate this. Things like regulatory sandbox environments, where these systems can be brought and tested without fear of regulatory penalties. I think we're going to see all of that infrastructure.
Gareth Workman
Yeah, that sort of safe space to really test and challenge these things in a real scenario as opposed to kind of hiding it away.
Ryan Donnelly
Exactly. And the AI Office is already up and running in the European Union as well. So I think, ahead of us, you're going to see more oversight without a doubt - in Europe, but probably also in the UK, where there are definitely elements of that already happening with the AI Security Institute. You're seeing more of it in the US as well at different state levels; there are some broad horizontal laws coming in at the state level in the US today.
There are also industry-specific laws already in force today in the United States around managing AI in specific verticals. So you will definitely see increased oversight and increased management around these tools. And if I got a crystal ball out, there will probably be some anecdotes and tweets that come out of this where it would be easy to say, hey, that's harming innovation here or there. But I actually think the overall direction of travel is going to be a net positive because, as I said, it's hopefully travelling in the right direction.
So, I'm an optimist there and I think there will be increased oversight. We can see the structures that it's going to take. And I am very optimistic it's going to be a net positive.
Gareth Workman
Absolutely fabulous, Ryan. One of the things about Beyond Boundaries is that we don't just explore the future of AI - we look at equipping business leaders with the insights they need to act now. So, if a senior business leader approached you and asked for one crucial tip around AI governance, what would you tell them? It may be a repeat of the message before, but what would you say to that person?
Ryan Donnelly
Well, I would ask them a few questions first of all, just to find out where they're at in that AI journey. If you're someone who's just getting started on your AI governance journey - or just your AI journey generally - I would say put some framework in place to make sure that you get the most out of this and that you're managing those risks. Do those three easy things that I said: the inventory, the assessments, mitigate the risks.
If you're a little bit further along that journey and you've already kind of got AI in place, what I would probably say to you in that circumstance is take stock a little bit. Have you got that policy in place around your AI? Okay, have you got a framework to manage all this stuff? Do you know where AI is being used in your business? If not, take steps to address this.
And if anyone along that scale ever falls into the zone of what I call analysis paralysis, I would actually pull them aside and sit them down in a side room for a real talking-to. I would say: hey, this space is moving tremendously quickly and it's not going to wait while you do your analysis. You have to get started. Do something - anything, quite frankly. Take some of those basic steps. By the time you've finished your deliberations about who should be responsible for this or that, or decided you can't do it because it's moving too fast, you'll be left behind. You really will. Inaction is really just not an option. So there's kind of three different scenarios, and that's the advice I'd give.
Gareth Workman
Absolutely magic, Ryan. We've covered a lot today: rapid evolution, the risks that leaders can't ignore, shifts in the regulatory landscape. So if there's one takeaway you want business leaders to remember - if they only listen to this one bit - what's the one thing?
Ryan Donnelly
Get started. And accept that it's an organic journey - a growth journey that you're going to go on. So get started today: I would start with an inventory and some risk assessments. Be ready for that to grow over time, and for you to grow your programme, adopt new things as you go and expand it over time.
But if I could leave a final message, it would probably just be: it is exciting, okay? Make the most of this. It is a really unique opportunity that we're living through in terms of the potential here, and it's only going to get more exciting by the day. So keep that optimism - but put the right guardrails in place to make sure that you're getting the most out of it.
Gareth Workman
Fabulous, Ryan - some inspirational calls to action.
So, that brings us to the end of this episode of Beyond Boundaries. Thank you for sharing your insights. You know, it's been clear that leading in AI today means balancing innovation with responsibility. So, you've given our listeners a lot to reflect on. Thank you, Ryan.
Ryan Donnelly
It was a pleasure. Thank you very much for having me.
Gareth Workman
At Kainos, we believe that leading in AI isn't just about embracing new technology - it's about making smart, responsible choices today that shape a human-centric, future-ready business for tomorrow. If you enjoyed this conversation, subscribe wherever you get your podcasts and head to kainos.com/beyondboundaries for more insights. Join us next time as we continue to unpack the big questions shaping AI and business. Until then, stay curious, stay innovative and let's go Beyond Boundaries together!
End of episode