Beyond Boundaries S1 E5 - Knowing the unknowns: How leaders can stay ahead of AI security risks

Date posted
27 June 2025
Watch time
20 minutes

Knowing the unknowns: How leaders can stay ahead of AI security risks

In this episode of Beyond Boundaries, Gareth Workman, Kainos' Chief AI Officer, is joined by John Sotiropoulos, Head of AI Security at Kainos, and Hyrum Anderson, Director of AI & Security at Cisco, to explore one of the most urgent and misunderstood challenges in AI today: adversarial risk.

As AI becomes more embedded in business, it also becomes a target. But this isn’t about fear - it’s about readiness. Drawing on experience from OWASP, NIST, Microsoft and Cisco, they unpack what leaders need to know - from securing AI by design and aligning innovation with resilience, to building cultures of trust and accountability.

So what does a secure, resilient AI ecosystem really look like? How can security become a driver of innovation, not a barrier? And what should leaders be doing now to stay ahead of adversarial risk?

Tune in for practical insights and strategic perspective to help you lead with confidence.

The full episode transcript is available here.

Watch here:

Listen here:

Available on your favourite platforms:

Sign up for episode reminders and exclusive thought leadership

Sign up for monthly episode reminders, bonus content, and thought leadership straight to your inbox.

Sign up for episode reminders and exclusive thought leadership

Sign up for monthly episode reminders, bonus content, and thought leadership straight to your inbox.

Transcript

Teaser

Hyrum Anderson
It is a technical problem, but it's also a strategic problem, meaning that there are nuances about AI that are a little different than traditional software.

What leaders need to realise is that the framework doesn't need to change, it's only sort of execution of specific tasks and it's not that hard to become proficient in that. Don't run from what you know, lean into it, pick up a few new tricks along the way, and you're going to be good to go.

Start of episode

Gareth Workman
Welcome to Beyond Boundaries, the podcast from Kainos that helps business leaders navigate the fast evolving world of AI. I'm Gareth Workman, Chief AI Officer at Kainos. And today we're focusing on the challenge of keeping AI secure. So as AI becomes more embedded in our business operations and decision-making, the question is shifting from we build it to how we protect it. And we're talking about adversarial AI, the risk of new systems being misled, manipulated or exploited. But this isn't about fear.

It's about being ready. So joining me today are two world leading experts who bring extraordinary depth to the conversation. So first, I have John Sotiropoulos, who is Head of AI Security, whose global experience spans DSIT, NIST and OWASP, from shaping international standards to securing national scale systems. John, welcome, it's great to have you here.

John Sotiropoulos 
Great to be here, Gareth.

Gareth Workman 
I'm also delighted to be joined by Hyrum Anderson, Director of AI and Security at Cisco, whose career spans research and leadership roles at MIT Lincoln Lab, Microsoft, and Robust Intelligence. A pioneer in adversarial AI and red teaming, Hyrum has advised both US and UK governments on AI safety. So Hyrum, thank you for being here today and welcome to Beyond Boundaries.

Hyrum Anderson 
It's great to be with you, Gareth and John.

Gareth Workman 
Thanking you. So, let's dive right in because there's a lot to explore in today's conversation. So to start us off, John, I'm going to come to you first. So many leaders are just starting to hear the term adversarial AI. So could you explain it in kind of simple terms and why it's relevant in the boardroom level today?

John Sotiropoulos 
If you think about adversarial AI, it's effectively manipulating intelligent systems to fool themselves to make the wrong decisions. Think about like the social engineering of AI, where you lead the system to do things that you didn't expect it to do, whether it's bypass decisions, extract data, get the wrong decisions. That's the type of thing that we see with adversarial AI. And Hyrum has also written a great book on adversarial AI, so maybe Hyrum can add to that.

Gareth Workman 
Yeah, Hyrum, it would be great for your perspective as well.

Hyrum Anderson 
Yeah, you know, it used to be that adversarial AI was relegated to academia, an esoteric study of what you could do algorithmically to make a machine learning system go wrong. And I think two major things have happened that make this real makes this relevant to everybody. Number one is that software is changed by AI. You know, AI is software today and every software system is adapting that.

And second, one learning that we've had over recent years with, you know, ChatGPT jailbreaks and whatever, is that it doesn't take an algorithm to make these systems at Runtime come down. So it's no longer just the realm of academia. But the purpose is the same, is that when there is an adversary present, software systems built on AI can behave unexpectedly to their advantage.

Gareth Workman 
Yeah. And as you say, that's the thread we're going to pull today and give a little more insights. So John, you've worked with OWASP, NIST, DSIT, you know, around this space. So what trends are you seeing emerge around threats and what should business leaders be thinking about in terms of their response?

John Sotiropoulos 
I think Hyrum started that thread there. It used to be, and we still see dozens of academic papers talking about specific adversarial AIs that's very narrowly defined. But then generative AI, with a natural language interface makes it really, really easy to manipulate that. And also agentic AI begins to have consequences beyond what we expected. From data exfiltration, from trust manipulation, we have seen examples where a very simple, at OWASP, a very simple co-pilot, because it's trusted so much, people can use email to create what we call indirect prompt injections, in other words, put things there that are interpreted as system commands and exfiltrate data. So we're beginning to see with agentic AI and with generative AI, something that used to be academic now becoming part of the mainstream in creating new attack surfaces.

Gareth Workman 
Cool. Hyrum, in terms of your experience, you've seen lots of different industries. Do you see how these threats are being addressed? Can you see some of these things happening or other examples?

Hyrum Anderson 
Yeah, do you know, actually, gratefully, security leaders already have kind of the tools, most of the tools necessary to tackle this problem. As John and I have been mentioning, fundamentally, these are software systems and the kinds of risk mitigations and controls are still appropriate with a few wrinkles, right? With a few things that you need to sort of upgrade. I've been really gratified to see forward leaning organisations who have sort of active risk management programs lean in to how they're adopting AI. And this becomes something that I think will be a strategic advantage to them, meaning that they're not gonna let AI fear slow them down. They're gonna mitigate, they're going to control the risk and they're going to push forward with their AI strategies in a bold way. And I think that's going be a strategic advantage for them.

Gareth Workman 
That's fabulous. You've hit something else I'd love to hear your thoughts on Hyrum. And I'm going to apologise for this first, because sometimes security can often be labelled unfairly as that, sort of, blocker to progress. From your perspective, how can building that resilience in adversarial threats become that kind of strategic or competitive advantage? How do you see that kind of playing out?

Hyrum Anderson 
Yeah. The enduring advantages of AI will come from those that are safe and secure. Ultimately, you see this actually, this is the long game and you've seen this happen historically. Think about, you know, what Apple considers its competitive advantage with privacy or now at Microsoft Azure, how they market security as a competitive advantage. This will be the competitive advantage. In the near term, there definitely needs to be controls in place. You can't just let everybody run to Hugging Face and download whatever and start, right? There's both application security considerations as well as enterprise security, but those can mostly be addressed with your existing kind of risk management frameworks and plans.

Gareth Workman 
Yeah, but it's really interesting as you say, like lots of organisations now are really majoring on how much they're doing in the security of their platforms, to be that advantage, you know building and generating that trust, you know, things that people assumed are implied, but actually they're going out of their way to illustrate how much is going in here.

Hyrum Anderson 
Yeah. Can I give you like, a couple of cool examples? 

Gareth Workman 
Of course you can.

Hyrum Anderson
We're working with a company who decided early on that for the first few weeks of them getting the AI craze, they had to block all of Hugging Face. But then they figured out controls where they could let and allow their developers to really download and experiment with these models in a way that was safe for them. So before this whole idea became a fire in CISO's minds, they were there. And they were exploring that risk and preparing so that they could have a head start in that space. Another simple example is at Runtime and how people are allowing their developers to use service platforms to scale their AI. And just defining what the risk is, understanding and accepting in many cases what that risk is so that they can allow their developers to really run fast with AI.

John Sotiropoulos 
And I think you know that I want us to stay a little bit on that risk element because I think actually how cyber security becomes a competitive advantage and competitive edge is when it stops being all or nothing. I think, Hyrum, with the examples you've given, that all or nothing, either you run away from AI, which is a vulnerability, or you block everything, which actually doesn't progress anything. By understanding the risks, you and we always recommend, and we're seeing organisations here in the UK, and that's the DSIT guidelines using threat modeling, a technique that we highly recommend. You don't just accelerate security, you actually understand what your system does. You become in control of the narrative and that really for me is what it makes resilience not a brake but it makes an accelerator for innovation and that's why security can help you.

Gareth Workman 
Very cool, John. just pulling a bit more on that, kind of, thread of vendors that are kind of really pushing and illustrating what they're doing. So John, what sort of things should leaders be asking of their AI vendors to ensure their systems are secure by design? What sort of things would you do to lean into that space?

John Sotiropoulos 
I think they should stop asking, is it secure? That's not a question they should be asking because the answer will always be yeah. I think they should evaluate how they're going to use the system. And they should, again, I'll come back to threat model, maybe a lightweight version of that, understand how that fits in there, what are the threats and risks, and then ask the vendor how the vendor actually mitigates those risks. Are they logging the data when they use a model? Are they secure by design? Those are the things I would be asking.

Gareth Workman 
Very cool. Hyrum, is there anything you'd want to add? Are there things you would be people to say or speak to their vendors or their product suppliers about?

Hyrum Anderson 
I think where we see people who are adopting AI finding the most things to be concerned about and control have to do with kind of bleeding edge adoption of agentic systems when third party systems are involved. And it's mostly about, as John mentioned, sort of like where is the data going for data loss prevention types of concerns. And it's also a lot about identity. What sorts of privileges is my agentic system I'm developing inheriting from a user or inheriting from the developer? And it should always be from the user, of course, not the developer.

Gareth Workman 
Cool. So one of the things, Hyrum, I'm going to switch a little bit here. So adversarial AI often gets framed as a technical issue, but it's clear there's a leadership dimension here. So what sort of mind shift do you think leaders need to make to embed trust and security into their broader AI strategy, as it were?

Hyrum Anderson 
Yeah, I mean, It is a technical problem, but it's also a strategic problem, meaning there are nuances about AI that are a little different than traditional software. So examples of that is, you know, if I can patch a CVE, it's really hard to patch an open source model, right? So there are other ways to do those kinds of things, but what leaders need to realise is that the framework doesn't need to change. It's only sort of execution of specific tasks. And it's not that hard to become proficient in that. find the right tooling that controls for those risks. So my advice would be Don't run from what you know, lean into it, pick up a few new tricks along the way, and you're gonna be good to go.

Gareth Workman 
Fabulous. John, in terms of kind of that leadership maturity bit, what are some of the gaps, the bright spots that stand out to you? What do you think, you know, are the obvious things they're maybe missing?

John Sotiropoulos
I think understanding is the biggest missing piece at the moment. And I think treating security as something that's out there instead of integrating it as a culture is the biggest risk that I see. And you see all kind of AI security influences out there, publishing zillions and zillions of vulnerabilities and threats that may never happen to you. Kind of takes you a little bit away from what you need to do. And what you need to do is what Hyrum was referring to. Understand the risk, do an evaluation of what you have, the tools, and uplift your cybersecurity to be able to mitigate. For sure, you need some new techniques there. I think a bigger change that as an organisation you face is you used to worry about uptime all the time. With adversarial AI, yes, it's technical, but what it does is trust becomes the new uptime. And you need to address that as part of your thinking and your cybersecurity.

Hyrum Anderson 
Can I just jump in and like, John's too humble to toot his own horn, but all of the listeners need to look at the work that John has been doing with OWASP in particular, about how to secure AI and agentic AI systems. That work is gonna be a bellwether for people to compare to for years to come. So I'd recommend that you take a look at that.

John Sotiropoulos 
Thank you, Hyrum - and you're part of that work too.

Gareth Workman 
Well said, Hyrum. And maybe that, you know, we talked about businesses that maybe haven't lent in heavily, but are maybe early in their journey, Hyrum. Beyond kind of what you're sharing, obviously, the great stuff with John, but what other small steps do you think they could take today to kind of reduce risk or build trust in systems and data? You know, those taking those tentative steps and they don't want to end up in fear or stall. What sort of things do think they should be doing?

Hyrum Anderson 
Yeah, for people who are still dipping their toe in the water, that's good to internally dip your toe before you sort of maybe release a public AI product. While you're doing that, now is the time to invest. Threat modeling, another one I'd add is when we say the word red teaming, it can sound so sophisticated. Actually, you just need to put on the role of an angry user and see what can go wrong to understand. Like really it's not that sophisticated what you need to do. A red teaming exercise stems from a methodology that you will have and the findings you have are not magic. They come from you trying to be just conscientious about this methodology. A one week, a one man week investment is going to surprise you about the kinds of things that you find that will help you to measure and manage the risk before you decide to go public with your product.

Gareth Workman 
But it's amazing as you say, sometimes that things that do not appear in the lab just miraculously appear in the real world and that's just eye opening.

Hyrum Anderson 
And especially with a large language model that can respond differently to small changes in the input. The unit test never will bring it up, right?

John Sotiropoulos 
Exactly.

Gareth Workman 
Yeah, that's so subtle. John, I'm going to ask you a slightly different thing. So again, in terms of kind of security and innovation teams, how do you get them working more closely together so that you avoid that friction so that you can move fast but also stay secure?

John Sotiropoulos 
I think this is the culture shift that we need to make for a couple of things need to happen. We need to be more risk-based rather than either or. And we covered that with Hyrum earlier on. And the other one is develop that secure by design from the beginning so that we accelerate innovation. But I think we also have to face realities. And the reality is that we have outstretched cyber teams. They have new legislation coming in, the Cyber Resilience Bill. And as once someone said, I'm drowning and you're throwing the water, showing the water. So I think there are opportunities for innovation in AI to help, help teams accelerate monitoring, accelerate, you know, this is where tools can help to the innovation. But really is putting instead of making a checkbox, making it a culture that becomes part of building systems. And either you start for new things with modeling, secure by design, or people will do that. If you have something out there, the red teaming suggestion Hyrum was making is a must.

Gareth Workman 
Perfect. So one of things we always do, try to do at Beyond Boundaries is help leaders not just understand what's happening AI, but you know what to do next. I'm going to ask you both the question that gets to the heart of where this is all heading. So Hyrum, I'm going to come to you first. So when you imagine that kind of the future secure resilient AI, what does it look like? And equally, what can leaders be doing today to help shape that future?

Hyrum Anderson 
Yeah, first I would call it a future of secure and resilient software. AI is a component that's maybe not even integral, but pervasive in it with systems cooperating with each other. And what it means to be secure, again, I'm going to refer to OWASP. You'll see a great roadmap in that work. You're going to see that there are controls in place to handle identity, least privileged principles that are always a part of the software systems that still are sort of being adopted by agent systems. The future is one in which you can ask an agent powered by AI to do something and it will have the appropriate permissions to access the appropriate systems and no more, and can take autonomous action only with delegated authority from the user and according to my permission level, right? And those results can then be reviewed before potentially actually taking action.

Gareth Workman 
Yep. But as you say it's just that, you know, those guardrails to make sure it's a controlled process. In the same way we've done with many years of kind of access control. It's that it's least privileged and appropriate.

Hyrum Anderson 
I'll also say that like so much, of course, the future is shaped by the present. I think we're going to look back in five years and think of today with AI, sort of like how we're all thinking of a 2400 K-BOD modem dial-up. Like in terms of speed and sophistication and security, all those things, like it's here and we have the connection, but it's actually still in its infancy. There's so much more to come.

Gareth Workman 
Perfect. Excellent. Thank you for that, Hyrum. And John, over to you. So from your vantage point, what do you see rising to the top of every leader's agenda if they're serious about building secure AI?

John Sotiropoulos 
Ask your cyber security team, are you ready to handle AI? And have that conversation. Not as a, again, not as a checklist, not as a demand, but really creating a culture where the cyber team understands those threats. Because as Hyrum said, we have secured applications before. This isn't any different. We saw it in the cloud too. We all said, oh my goodness, the cloud, how are we going to secure it? We did. We just upgraded, uplifted cyber. Unless we talk to cyber and unless we make it part of that conversation, I think there will be silos and this is where the next breach is going to happen.

Gareth Workman 
Perfect. Perfect. So look, thank you both. We've covered a lot of ground today - you know, understanding adversarial AI, exploring how leaders can embed security, and trust into their AI strategies. But before we close out, I kind of want to bring it back to mindset because how leaders think about AI and security is just as important as what they do. So I've got one final question for both of you and it's going to be the same question. So John, I'm going to lean into yourself first. So if you could leave a business leader with one mindset shift about AI and security, what would it be?

John Sotiropoulos 
I will repeat myself. Don't treat AI security as a checkbox. Treat it as a culture, make sure that cyber is part of the conversation of what AI does for you.

Gareth Workman 
Clear and concise, lovely. Hyrum - yourself, what's your one-liner?

Hyrum Anderson 
Yeah, a similar take is that don't treat AI as a distinct piece of software. The future is that it is software. AI is changing software. And so it's in some ways changing security. But keep your fundamentals in your risk mindset.

Gareth Workman 
Fantastic, both. Thank you very much. Lots for our listeners to reflect on. So that brings us to the end of the episode of Beyond Boundaries. So John, Hyrum, thank you both for such a powerful and thought provoking conversation. You know, it's clear that building secure and resilient AI isn't just a technical challenge, but a leadership one. Any other final thoughts you want to leave?

Hyrum Anderson 
There's a ton of resources there. OWASP, check NIST, there's a lot of work coming out of the UK and AI institutes on AI security. So there's a lot of reference material to look forward to to learn more.

John Sotiropoulos
There is momentum, use it, and I'm sure in the years to come, we'll look back and say, why did we ever worry? That would be security as business as usual.

Gareth Workman 
Wise words, John. So for our listeners, you know, the final message is clear, you know, security isn't just something you bolt on later. It’s a mindset to adopt early, a responsibility to lead with, and, ultimately, a catalyst for building trust in your AI future. So thank you for listening to us today.

Hyrum Anderson 
Thanks, Gareth.

Gareth Workman 
At Kainos, we're committed to helping you harness the power of AI responsibly while navigating the evolving landscape of trust, resilience, and ethical innovation. If you found today's conversation valuable, don't forget to subscribe wherever you get your podcasts. And for more insights, visit kainos.com/beyondboundaries. We'll be back next time to explore another big question at the intersection of AI and business. Until then, stay curious, stay innovative, and let's go Beyond Boundaries together.

End of episode