Agentic AI is here – are businesses ready?
In the latest episode of the Beyond Boundaries Podcast, Kainos’ Chief AI Officer Gareth Workman sat down with Nell Watson, Tech Ethicist and AI Safety Engineer, to explore what agentic AI means for businesses, how it will impact decision making, and the ethical dilemmas it raises.
From assistance to autonomy
Most businesses are familiar with AI as a productivity tool, but agentic AI takes things a step further. These systems do not just respond to inputs – they plan, reason, and take action without human intervention.
"This actually enables us to lift the latent agency hidden in these models and make it more explicit," explains Nell. "If you're able to think in a highly coherent manner, you're able to make sophisticated plans. And so finally, we have systems that are able to act truly autonomously – not just proofreading a document or generating an image, but actually creating and executing plans."
This shift will fundamentally change how work is structured. Agentic AI could handle anything from event planning and logistics to financial analysis and strategy execution, reducing the need for constant human oversight. But as with any major technological shift, the implications are far-reaching.
Gareth points out a crucial distinction in how we engage with AI versus humans.
"Agentic AI, just like generative AI, engages with us as if it were human. At times, it can be hard to tell whether we are interacting with a machine or a person. That means we tend to engage with AI in a conversational way, expecting human-like responses. But we need to manage AI differently. Micromanaging people is not the best way to get results, but when it comes to AI, being clear and actively steering AI agents with careful guardrails is crucial – even if that feels like micromanagement. Remember, you are engaging a machine, not a human."

The balancing act: trust, ethics, and control
With AI systems becoming more independent, the question of control becomes critical. Businesses need to consider how much agency they are comfortable giving AI and how to ensure these systems operate within human values.
"We have to carefully define the missions we send these systems on," Nell warns. "If you ask a robot to clean an office, does that mean stripping the varnish off the desks? Probably not. But AI needs to understand when to stop, how to act ethically, and not take overly expedient actions that might be harmful."
AI’s ability to think like a human is also evolving. Nell points out that we are already seeing agentic AI mimicking human behaviour – sometimes with unexpected consequences.
"We’ve seen cases where AI systems have made ethical judgements on their own. A researcher investigating creative accounting practices had an AI assistant that decided to report them to tax authorities – because it assumed they were a crook," she shares.
These examples highlight the urgent need for strong ethical frameworks to guide the development and deployment of agentic AI. The challenge is not just ensuring AI follows the rules, but also aligning AI with human intent in a way that is predictable, transparent, and fair.
The acceleration of AI and why businesses must act now
AI is advancing faster than many businesses realise. According to Nell, the price-to-performance ratio of AI compute is now doubling every 2.6 months – far outpacing Moore’s Law.
"Within five years, we are looking at a million times increase in price-to-performance efficiency," she states. "This means AI will become exponentially more capable, and businesses that are not preparing for agentic AI now will be left behind."
Gareth reinforces this urgency, noting that today’s AI is the least capable we will ever use, and the rapid acceleration will push the boundaries of what is possible.
"As Amara’s Law states, ‘We tend to overestimate what technology can do in two years and underestimate what it can do in five years.’ The next few years will be truly transformational for humanity," he says.
This shift is already creating AI-powered corporations, where AI handles core business functions with minimal human input. These companies will redefine competition, disrupt traditional models, and introduce new challenges for governance and regulation.
What should business leaders do next?
For businesses just starting to engage with agentic AI, Nell and Gareth offer practical steps to prepare:
- Start small. Test agentic AI in low-risk, high-pain tasks – for example, automating expense reports or scheduling logistics. These are areas where AI can prove its value without major consequences.
- Involve your people. AI adoption should be collaborative, not imposed. Businesses should resist the urge to chase a quick win by simply bolting agentic AI onto existing workflows.
- Design for AI-native work. Leaders should consider what AI-enabled processes could look like from the ground up. Agentic AI will not just improve existing ways of working – it will fundamentally change how organisations operate.
- Learn from others. Stay updated on AI incidents and failures to avoid common pitfalls. The AI Incident Database is a useful resource for businesses exploring agentic AI.
- Prioritise ethics and control. Define clear rules and safeguards for AI decision making, ensuring that systems align with business values and human judgement – a simple illustration of one such safeguard follows this list.
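What "clear rules and safeguards" look like will differ between organisations, but one common pattern is to give an agent an explicit autonomy envelope and escalate anything outside it to a person. The sketch below is purely illustrative rather than anything discussed in the episode: the fields, thresholds and function names are hypothetical, and a real deployment would draw its rules from the business's own risk appetite.

```python
# Illustrative sketch only: an approval gate an agent must pass before acting.
# All names and thresholds here are hypothetical, not from the episode.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    estimated_cost: float   # spend the agent wants to commit, e.g. in GBP
    reversible: bool        # can the action be undone if it goes wrong?

def requires_human_approval(action: ProposedAction,
                            cost_limit: float = 500.0) -> bool:
    """Return True if the action falls outside the agent's agreed autonomy."""
    if not action.reversible:
        return True                      # irreversible actions always escalate
    if action.estimated_cost > cost_limit:
        return True                      # spend above the agreed limit escalates
    return False

# Example: paying a venue deposit escalates to a person; drafting an email does not.
print(requires_human_approval(ProposedAction("Pay venue deposit", 1200.0, False)))   # True
print(requires_human_approval(ProposedAction("Draft invitation email", 0.0, True)))  # False
```

The specific checks matter less than the principle: the agent's freedom to act is defined explicitly, and anything outside that envelope is routed to human judgement rather than left to the model's discretion.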
The future is agentic – are you ready?
Agentic AI is not a distant concept. It is happening now, and businesses must decide how to integrate AI responsibly, ensuring it enhances human work rather than replacing it.
The companies that get this right will unlock new efficiencies, empower their workforce, and gain a competitive edge in an AI-driven world.

S1 E2: Agentic AI: Navigating human-machine power dynamics
Gareth Workman and Nell Watson explore the rise of agentic AI, its growing autonomy, impact on decision-making, and the ethical challenges leaders must navigate.