Knowing the unknowns: how leaders can stay ahead of AI security risks
As AI becomes more deeply embedded in how organisations operate and make decisions, a new frontier of risk is emerging. Adversarial AI, where intelligent systems are manipulated, misled or exploited, is no longer just the domain of academic research. It’s here, and it’s growing.
In episode five of the Beyond Boundaries Podcast, Gareth Workman, Chief AI Officer at Kainos, is joined by two global experts on AI security: John Sotiropoulos, Head of AI Security at Kainos, and Hyrum Anderson, Director of AI and Security at Cisco. Together, they unpack how adversarial threats are evolving and what business leaders need to do to stay resilient.
From academia to boardroom priority
Once a niche concern for researchers, adversarial AI is now a real-world risk that affects everyone. As Hyrum explains, “AI is software today, and every software system is starting to integrate it. One thing we’ve learned – through things like ChatGPT jailbreaks – is that it doesn’t take complex code to bring these systems down at runtime. When there’s an adversary present, software systems built on AI can behave in unexpected ways that work to the attacker’s advantage.”
John adds that the shift to generative and agentic AI is compounding the risk. “We’re beginning to see something that used to be academic now becoming part of the mainstream in creating new attack surfaces,” he warns, citing real-world examples of prompt injections and data exfiltration. As the threat landscape evolves, adversarial AI is no longer just a technical issue - it’s a strategic priority that belongs on the leadership agenda.

Security isn’t a blocker - it’s a competitive advantage
Despite the risk, both experts emphasise that AI security should be seen not as a brake, but as a business enabler.
Hyrum argues that “the enduring advantages of AI will come from those that are safe and secure.” He points to major tech platforms already positioning security as part of their competitive edge - something that will only grow in importance as trust becomes central to technology adoption. John agrees: “Trust becomes the new uptime,” he explains. “Security isn’t about blocking progress, it’s about accelerating it when you embed it into the way you build and innovate.”
What leaders should ask (and stop asking)
One of the most important mindset shifts leaders can make? Stop asking vendors if something is simply “secure.”
“The answer will always be yes,” John says. Instead, leaders should be threat-modelling their use cases, understanding the risks, and asking vendors how they mitigate those specific concerns - from data logging to secure design.
Hyrum adds that red teaming - once seen as overly technical - is a practice any organisation can adopt. “Put on the role of an angry user and see what can go wrong,” he says. “It doesn’t have to be sophisticated. A one-week investment will surprise you with what you learn.”
Building secure AI starts with leadership
The episode closes with a focus on culture, mindset and collaboration.
“Don’t treat AI security as a checkbox,” John says. “Treat it as a culture. Make sure your cyber team is part of the conversation about what AI does for your organisation.”
Hyrum echoes the call for integration. “Don’t treat AI as a distinct piece of software. AI is changing software - and in some ways, it’s changing security. But the fundamentals still matter. Keep your risk mindset.”
For leaders navigating AI adoption, it’s clear that preparedness, not perfection, is the goal. And those who act early, ask the right questions, and embed trust from the start will be better positioned to innovate with confidence.
What leaders should do now
- Stop asking, “Is it secure?” and start evaluating how systems are used, where risks emerge, and how they’re mitigated.
- Adopt threat modelling to build understanding of where your systems are most exposed.
- Use red teaming to pressure-test your AI - even a lightweight effort can surface serious insights.
- Ensure security and innovation teams are aligned - move from compliance checklists to shared ownership.
- Recognise trust as the new uptime - resilience and transparency are now key to long-term adoption.
- Treat AI security as a strategic priority - not just a technical challenge, but a core part of leading AI responsibly.

S1 E5: Knowing the unknowns: How leaders can stay ahead of AI security risks
Gareth Workman, John Sotiropoulos and Hyrum Anderson explore how leaders can stay ahead of adversarial AI - by designing secure systems and building cultures of trust, resilience and readiness.