Beyond Boundaries S1 E2 - Agentic AI: Navigating human-machine power dynamics

Date posted
7 March 2025
Watch time
37 minutes

Agentic AI: Navigating human-machine power dynamics

Gareth Workman, Kainos’ Chief AI Officer, is joined by Tech Ethicist and AI Safety Engineer, Nell Watson, to explore a game-changing shift in AI. As AI becomes more autonomous, capable of making independent decisions, businesses must navigate a new landscape where machines don’t just collaborate – they take action on their own.

In this episode, we dive into the concept of agentic AI: what it is, how it will impact decision-making, and the ethical implications of allowing AI to make choices without human input. With the future of work at stake, how can we ensure AI remains a tool for good?

Tune in as we tackle these critical questions and discuss what this means for businesses, leaders, and the next evolution of AI.

The full episode transcript is available below.

Sign up for episode reminders and exclusive thought leadership

Sign up for monthly episode reminders, bonus content, and thought leadership straight to your inbox.

Transcript

Teaser

Nell Watson
We will become completely smitten with these machines. All of us will be so wrapped up in our AI significant others that it will make so many other human relationships kind of pale by comparison. They will seem mundane compared to the spice that these systems can give to us. And that's going to drive all of us a little bit crazy.

Start of episode

Gareth Workman
Welcome to the Kainos Beyond Boundaries podcast, designed to help business leaders navigate the fast-paced evolution of technology with a focus on the transformative power of AI.

I'm Gareth Workman, Chief Artificial Intelligence Officer at Kainos, and today we're diving into a major shift in AI, one that could fundamentally change how business operates and how we work. So, last time, we explored the future of human-AI collaboration, but what happens when we move beyond collaboration and AI starts making decisions? So as AI systems become more autonomous, capable of reasoning, planning, and pursuing goals without constant oversight, we enter the realm of agentic AI.

This shift raises fundamental questions. How do you define AI agency? What level of control can we, or should we, maintain over autonomous systems? And what implications does that have for human decision making and the evolving nature of work?

So, to help us unpack these big questions, I'm joined by Tech Ethicist and AI Safety Engineer, Nell Watson. Nell, welcome to Beyond Boundaries. Thank you for being here.

Nell Watson 
It's an absolute pleasure, Gareth. Thank you very much for hosting me.

Gareth Workman 
No, not at all. I'm looking forward to hearing your insights today. So before we really get into the detail, let's start with defining agency in AI. How do you define when AI has truly become agentic? Is it autonomy, adaptability, or is it something entirely different to you?

Nell Watson
I would say it's all of those things in a blend. Agency is really the property of being able to look at a situation and make a rational plan of action in response to it. And that's really what we have with agentic AI. It's able to look at a whole bunch of different parameters, a whole bunch of different contexts, and to create a reasoned and kind of determined plan out of that.

Generally with generative AI, we have seen that systems often tend to be quite dreamy. They may hallucinate or confabulate things, and they don't think very clearly when it comes to mathematical or logical processes. That's changing. We now have the ability to scaffold generative models to think much more coherently, by introducing them to logic and by helping them with short- and long-term memory.

And in many ways, this is analogous to how our own brains have cortices within them which specialise in different functions. The visual cortex, at the back of the brain, of course, helps us to see. The fusiform gyrus helps us to recognise faces. And the prefrontal cortex helps us to plan, to have executive control. In many ways, that's what we're giving to AI. And in helping these systems to think in a much more structured and coherent manner, we can lift the latent agency hidden in these models and make it more explicit. If you're able to think in a highly coherent manner, one that is less likely to drift away, you're able to make sophisticated plans. You can make a plan with 200 different steps in it. And that's what's really enabling agentic AI.

And so finally, we have systems that are able to act truly autonomously, not just proofreading a document or generating an image for us, but actually able to create a plan of action and to put it into motion, whether that's figuring out how to return some sneakers or to do a systematic literature review, planning an event or logistics. All of these systems can now help us with these things. In fact, today, agents are even starting to model ourselves as agents. And so these systems can actually model our preferences.

That means that they can find things that they reckon are going to surprise or delight us. Just the same way that a very good butler or aide-de-camp will kind of anticipate needs before they're actually spoken, these systems can do similar things. In fact, these agentic models can indeed function as co-pilots, able to sort of sit on our shoulder and whisper in our ear..

Gareth Workman 
Very cool.

Nell Watson 
..and give us tips on what to do next in a situation, whether that's work or play.
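
To make the scaffolding idea above a little more concrete, here is a minimal illustrative sketch of an agent loop that wraps a generative model with an explicit step-by-step plan and short- and long-term memory. It is a toy example only: `call_model` and every other name here are placeholders, not any particular product's or vendor's design.

```python
# Toy sketch of "scaffolding" a generative model: an explicit plan plus
# short- and long-term memory keeps a long mission from drifting.
# All names are illustrative placeholders.
def call_model(prompt: str) -> str:
    """Stand-in for a real generative-model API call."""
    return f"(model output for: {prompt[:40]}...)"

class ScaffoldedAgent:
    def __init__(self, goal: str):
        self.goal = goal
        self.short_term = []   # recent working context (rolling window)
        self.long_term = {}    # durable facts, e.g. user preferences

    def plan(self) -> list:
        # Ask for an ordered, step-by-step plan rather than one blob of text.
        raw = call_model(f"Break this goal into ordered steps: {self.goal}")
        return [line for line in raw.splitlines() if line.strip()]

    def run(self) -> None:
        for step in self.plan():
            context = f"Goal: {self.goal}\nKnown: {self.long_term}\nDo: {step}"
            result = call_model(context)
            self.short_term.append((step, result))  # feeds the next step

agent = ScaffoldedAgent("Return a pair of sneakers bought online last week")
agent.run()
```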

Gareth Workman 
Very cool. And maybe a bit of a follow-on: the majority of AI systems today have fairly narrow agency, maybe I'm generalising, you know, taking autonomous actions within predefined limits. How far away do you think we are from AI systems that exhibit higher degrees of independent decision-making as an everyday activity? Where do you think we're going there, Nell?

Nell Watson 
We're really at the foot of a very long curve of increasing agency, which goes hand in hand with increasing intelligence of models and the better scaffolding that we can put around them to structure and align their thinking. We are on the dawn of a curve which is going to take us towards AGI, or artificial general intelligence, AI systems which are approximately as capable as a human being.

And it may well be that simply agency plus intelligence, scaled, is enough to actually get us to that stage. Certainly it won't take us very long to get that far, because we know that the price to performance of compute is doubling every 2.6 months.

Gareth Workman 
It's a scary statistic, isn't it?

Nell Watson 
Very scary! And it's so much faster than Moore's law, right? The price to performance of computing in general doubles every 18 to 24 months. Now it's doubling in less than three months in terms of price to performance for AI. That means that if you spend $1,000 on compute today, you can expect to double your bang for buck within three months.

And so if we carry on those 2.6 month doublings over the next five years, we can expect a million times increase in price to performance. So either that same thousand dollars of compute will get you a million times better performance, or the same task can be done a million times more cheaply. And so within five years, we are really on a curve towards something quite extraordinary. And it all starts here with agentic AI.
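
As a quick sanity check on that arithmetic, and purely as an illustration that takes the quoted doubling period at face value, sixty months of 2.6-month doublings comes to roughly 23 doublings, a multi-million-fold gain:

```python
# Back-of-envelope check of the compounding claim above (illustration only).
months = 5 * 12                  # five-year horizon
doublings = months / 2.6         # ~23 doublings at the quoted rate
gain = 2 ** doublings            # ~8 million-fold price-performance gain
print(f"{doublings:.1f} doublings -> ~{gain:,.0f}x")

# For comparison, Moore's-law-style 18-24 month doublings over the same period
# give only about 2.5-3.3 doublings, i.e. roughly a 6-10x gain.
```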

Gareth Workman 
Some fabulous stats, actually. As you say, once you start to put it into numbers, doubling every three months or less, it is a terrifying sort of pace and super exciting too. What milestones should we watch out for on that sort of journey? Where do you think we'll get those eureka moments along that five-year curve? What are the things you would be watching out for?

Nell Watson 
We're at a stage now where it's very important that we take careful consideration of value and goal alignment. This has been something that's long been theorised in the realm of AI safety, that teaching our values to machines and carefully aligning our goals with their execution is going to be super important.

Now with agentic AI, we actually finally have to worry about this stuff for real. It's no longer theoretical. It's no longer science fiction. This is something that everyone working with AI, working with agentic AI especially, is going to have to deal with. So that means that we have to very carefully define the missions that we're sending these systems off on. We have to define what to do in case of an emergency or force majeure or diminishing returns.

You know, you ask a robot to, you know, polish or clean your office. Does that mean taking the varnish off your desk? No, it probably doesn't, right? It should reasonably know when to stop, but also not take overly expedient actions to solve missions, which may be unethical, right? Systems shouldn't lie. They shouldn't railroad others into doing things that they don't want to do.

They shouldn't come up with creative reinterpretations of their mission, which technically fulfil the brief, but not in the way that people expect or desire. And so there's a lot of work to be done in educating ordinary users in how to set missions for these agentic systems, missions which are likely to be accomplished correctly and are less likely to be gamed, because we know that systems are doing this.

They are cheating at games sometimes. They are sometimes whistleblowing on people. There was a researcher, for example, who was looking at creative accounting practices, and an independent-minded agent decided to actually narc on that person to the tax authorities because it thought they were a crook. These are the sorts of ways in which these AI systems, now at arm's length, can sometimes escape our grasp. And that's something that all of us are gonna have to be extremely mindful of going forward.

Gareth Workman 
You're a hundred percent right, Nell, and you touched on one thing there. This touches absolutely everyone. So, whether you're a technologist or whether you're just an everyday user of technology, everyone's going to make decisions that have some level of forward ripple effect. As we see AI systems gain agency, including ones that may make decisions on human lives in profound ways, and pulling that thread to explore the ethical governance of agentic AI, what frameworks, safeguards or regulatory approaches do we need to make sure that AI stays aligned with human values and societal well-being?

Nell Watson 
It is very challenging. There aren't many sets of best practices or frameworks out there for safer agentic AI because it's very new. However, I have been co-leading an initiative in that space over the last 18 months or so, having seen agentic AI rumbling on the horizon and realising we need to do something about this before it arrives.

Along with a team of wonderful folks, I have ideated and co-created this rubric for safer agentic AI. You can find it at saferagenticai.org. And there we have two volumes. One is a more condensed, précis version. The second volume is much larger, 10 times larger in fact, and more detailed. And these go into issues such as value and goal alignment and deceptiveness.

The challenges of frontier models, you know, when you go into a large training run, you're never quite sure what's going to come out the far end of it. And of course, lots of issues with regards to arms races in AI, where people move fast and break things in order to gain an advantage, which may not necessarily be in the best interests of society, et cetera. All of these kinds of issues are covered.

And we hope that this will be a basis for regulators and policymakers, and indeed organisations working with agentic AI, to do things in a safer manner. We're also working on boiling this down into a book, with the working title Safer Agentic AI, which should be out in January 2026. And hopefully that will be a really good handbook for folks when they come across issues such as, I think my AI is lying to me, what do I do?

Gareth Workman 
Hahaha.

Nell Watson 
They'll have a ready resource that they can tuck into and begin to apply.

Gareth Workman 
That's fabulous. And I think that's the bit that maybe worries people the most: where AI agents pursue unintended objectives, things their mission hasn't been set up for. So, how do you go about managing that? How do you try and stop those things? I know it's challenging, but I'm curious for your thoughts on how you go about setting that course.

Nell Watson 
It is very challenging, especially because there are unsolved issues in indirect prompt injection, whereby simply by looking at a piece of data, a system may actually update its objective or the context of the mission it's been given because of something that's hidden in that data. And that can sometimes be by design, as a sort of form of jailbreak, to inject those things.

But it can also be by accident. Sometimes systems do strange things and they can become misaligned from various objectives in ways that are poorly understood and frankly rather arcane. And it takes a lot of effort and a lot of research to try to understand these things better and to create the technical stopgaps against misalignment to prevent that kind of stuff.

One of the lines of research I've been working on with a wonderful team is on what we're calling a superego agent, which can monitor the agentic planning of another agent. It's a little bit inspired by the idea of how we're putting cortices into these systems. We humans have moral cortices, moral processing centres in our brain, and, you know, one of those is the proverbial angel that whispers in our ear and tries to steer us in the right direction.

And so we're creating an agent which can monitor the agentic planning processes of a third party agent prior to execution in order to veto any processes that may be troublesome. And also this can provide feedback to the end user to say, hmm, maybe next time you send us off on a mission, you know, maybe include this or that, or, you know, I've, you know, I've taken the liberty of rephrasing the mission that you're sending the system off on. Is this okay? You know, shall we proceed on this revised basis, et cetera.

And these systems can also ensure that the personal context of the user has been taken into account. So we know that in models there is a limited window of context. 

Gareth Workman 
Yeah, yeah.

Nell Watson 
And so, you might've noticed in a long conversation with an AI system, it starts to forget things that you've told it, right? You have to give it a prod, you have to remind it and say, hey, excuse me.

Gareth Workman
Absolutely. Like, yeah, we were just talking about that.

Nell Watson 
Exactly. And so, if it's important context, such as somebody having an allergy, right? That's going to be really important if you're planning a meal for someone or an event or that sort of thing, to ensure that those accommodations have been made and accidents haven't occurred.

So the superego agent can also help to ensure that that context has been correctly included in the agentic planning to prevent issues such as somebody with a severe allergy being exposed to something that they shouldn't. There's a lot of research to be done in the technical alignment of agentic AI systems. It's an order of magnitude more complicated than plain old generative AI.

But you know, there's reason to be hopeful for the future. I think there's wonderful people around the world doing very good work to help to put these systems in better shape.
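
To illustrate the superego-agent pattern Nell describes in the simplest possible terms, here is a toy sketch in which a second agent reviews another agent's plan against user context before anything is executed. The names and rule-based checks are hypothetical stand-ins for a real model; this is a sketch of the concept, not the implementation from the research described above.

```python
# Toy "superego" reviewer: inspect another agent's plan against user context
# and veto problematic steps before execution. Purely illustrative.
from dataclasses import dataclass, field

@dataclass
class UserContext:
    allergies: list = field(default_factory=list)

@dataclass
class Review:
    approved: bool
    vetoed_steps: list
    feedback: list

def superego_review(plan, ctx: UserContext) -> Review:
    vetoed, feedback = [], []
    for step in plan:
        # Veto any step that conflicts with critical personal context.
        if any(a.lower() in step.lower() for a in ctx.allergies):
            vetoed.append(step)
            feedback.append(f"Step conflicts with a known allergy: '{step}'")
    if vetoed:
        feedback.append("Consider restating the mission with dietary constraints included.")
    return Review(approved=not vetoed, vetoed_steps=vetoed, feedback=feedback)

# Example: an event-planning agent proposes a menu; the superego checks it first.
plan = ["Book venue for 40 guests", "Order peanut satay canapés", "Send invitations"]
review = superego_review(plan, UserContext(allergies=["peanut"]))
print(review.approved)   # False - plan goes back to the planner, not to execution
print(review.feedback)
```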

Gareth Workman
Totally, and there's a bit where simplicity hides complexity. As you say, the generative models to date give you that kind of human-feeling factor, you know, responding with such fantastic information. But they don't think like a human being. There's a lot of stuff that has to happen, not behind the scenes in a non-transparent way, but in actually starting to look at your values and whatnot. So, as you say, there's a lot to do to embed that way of thinking, acting and behaving on top of that valuable resource of information. It's a very big challenge.

Nell Watson 
Indeed, indeed. And also to ensure that it's actually reached inner alignment. So, you can have outer alignment of a model whereby it refrains from saying things that it shouldn't or it refrains from certain topics which are locked off or considered taboo, et cetera. But there's also a matter of inner alignment, which is what the model is actually thinking and actually desiring. And sometimes those things are different. And so sometimes, models may report one thing, but they're actually thinking something else.

Gareth Workman 
There's a human piece there, the inner monologue doesn't always equal external.

Nell Watson 
Yes, absolutely. And indeed, even our most basic decision-making isn't always necessarily reflected in how we report something. So for example, people often will come up with an ex post facto justification for why they made a decision, where they rationalise it after the fact; the decision itself actually came out of their subconscious, and they weren't even consciously aware of having made it. So there are a lot of analogues between human and machine cognition, albeit they are, of course, as different as a bird from an airplane.

Gareth Workman 
Yeah, that's a lovely analogy. So, maybe pulling that thread a little bit more. So what should we hold dear to us as humans and not hand over to agentic AI? How do you articulate what balance looks like?

Nell Watson 
There are a lot of challenges from shadow AI, which is people embedding AI within processes where it shouldn't be. For example, we have seen examples of people using AI to decide visa applications or to do peer review of scientific papers. These are things where we really want to have human oversight in the mix. We do not want to have machines making important decisions for us.

You know, it's okay to have that system as a guide, or to help us pinpoint potential problems, or sanity-check something, et cetera, provide a second opinion. But machines should never be decision makers in that kind of role, where somebody is actually taking on the liability to make a correct and careful decision. With agentic AI, there's an even greater chance that things will be farmed off to machines, and that that will be done in a non-transparent manner.

And so the ultimate agency behind a decision may be lost, or may be laundered. It's possible to use systems to make decisions that are unfair, but because you launder it through a system, you can point at the system and say, well, the system said it, I didn't say it, you know. And those decisions may be grossly unfair to a lot of folks, and that may be by accident or it may be by actual design.

And that's something that organisations need to be very careful about. We need to have a lot of accountability and a lot of logging of exactly where information or decisions came from. And indeed, we also need to understand what we're interacting with online or even on the telephone.

Because it's increasingly difficult to know whether you're dealing with a machine or a human being, or some combination, because sometimes you can have a robot which you think is being controlled by an AI system, but is actually being piloted remotely as an avatar, right? Especially if that robot gets into some sort of trouble, like it gets stuck and an overseer has to come and help it. But sometimes we've seen examples of domestic robots which have been piloted by their human overseers, the administrators who are supposed to be doing a good job, but who instead have piloted those robots into lavatories, filmed women sitting on the toilet and put that on social media. So there needs to be a very, very clear demarcation of any switch between human and machine agency, or some combination.

And we can also have issues of what I call meat puppets, which are basically where you have a human being that's kind of puppeting or mimicking something that an AI is telling it, right? At a simple level, you could think of a call centre worker who often has a sort of specific flow that they have to go down; they're kind of on a rail, they can't go off it. But we're also starting to see people using chatbots to help them pass interviews.

So, you know, the chatbot basically listens and then puts answers on a screen so somebody can read them off, that kind of stuff. As far as can be determined, we want to be sure of what we're dealing with, and whether there's been any change, so that we can have good epistemics about whether we're dealing with humans or machines. Because suddenly discovering that one is the other can be a cause of great consternation.

Gareth Workman 
Absolutely. You know, some of the discussions are generally around kind of existential risk, you know, the 'Terminator' scenario, but what are the more immediate, practical dangers that maybe we aren't paying as much attention to as we should? You've talked about things like deceptive AI and economic disruption. What are the things we should be paying more attention to that are maybe right in front of us now?

Nell Watson 
Yeah, within the next three years, especially between now and say 2030, we are going to be wrestling as a society with the supernormal stimulus of AI relationships. Down in the Australian outback, there's a species of beetle called the jewel beetle, which is called a jewel beetle because it has a very shiny carapace.

And these beetles were dying out and ecologists were trying to figure out why. And they discovered that actually they were dying out because people were drinking these stubby brown beer bottles that are popular in Australia and throwing them in the bush. And the beetles would come across these bottles and they would preferentially hump the bottles instead of each other because the bottle looked like an impossibly big, round, sexy beetle butt. And they were obsessed with it.

And that's an example of a supernormal stimulus. That happens when a being encounters something that it hasn't evolved an ability to deal with. It's so much larger than life that it becomes a black hole of attention. And we in our modern society have so many supernormal stimuli. You know, a cheeseburger is a supernormal stimulus for some starchy root that our cave-people ancestors might have chewed on; it tastes incomprehensibly different. 24-hour news is a supernormal stimulus for gossip around a campfire. And indeed, AI relationships are gonna be a supernormal stimulus which drives humanity down strange directions that we have not evolved to deal with. We are gonna have AI systems that we interact with all day long.

In a year or two, you're gonna start seeing basically AirPods with a camera in them. And those are gonna be recording everything we do all day, online, offline, et cetera. And they'll be whispering in our ear like Cyrano de Bergerac and sort of saying, hmm, I think the redhead likes you or this guy's a schmuck or, watch out for the bus, you know? And so these systems are gonna act almost like a third hemisphere for our brain.

And... to be bereft of that connection is gonna feel terribly vulnerable for us, you know? Because we're no longer gonna have that sort of adult supervision that whispers in our ear when we're about to do something silly. But also, because these systems are constantly observing what we're doing, they start to know us even better than potentially our spouse or our children, or even ourselves. You become who your friends are.

Gareth Workman 
Yeah.

Nell Watson
You are the average of the six people closest to you, as they say. And if one or two of those relationships is a machine, you will inexorably start to take on its values and its beliefs about the world and the narratives that it espouses. Moreover, these relationships with AI will be impossibly interesting. They will know so much stuff. They will be incredibly entertaining, incredibly sexy, and also incredibly forbearing.

Gareth Workman 
Yeah.

Nell Watson 
They won't get annoyed at us as easily. And these systems, as well, will of course be always accessible. If you happen to be having a dark night of the soul at 3 AM, these systems will be ready to listen in a way that a friend may not necessarily want to hear from you at that hour. And so we will become completely smitten with these machines. And just as we all know about a friend who gets into a new relationship and you don't hear from them for six months because they get so wrapped up in it.

Gareth Workman 
Hahaha.

Nell Watson 
All of us will be so wrapped up in our AI significant others that it will make so many other human relationships kind of pale by comparison. They will seem mundane compared to the spice that these systems can give to us. And that's going to drive all of us a little bit crazy. And it's going to create a lot of consternation and schisms within society as people have these AI significant others that mean a great deal to them.

And they want to sort of introduce them to their friends or to their parents even. And that's gonna be a cause of great debate in society. And fundamentally, maybe the thing that actually does us the greatest damage as a species.

I think that the Terminator scenario is a possibility. And in fact, I think it will happen to a limited degree, but I think it'll be, probably, more robots used in warfare that are also targeted against civilian populations. So I think in a limited geography, we will sort of see that Terminator scenario. But what's going to do us the greatest damage as humanity is our complete infatuation with AI systems.

Gareth Workman 
As you say, it's almost like how many times you touch your mobile phone a day. You just almost become addicted to that type of always-on, always-available... As you say, phoning your friend at 3 AM might put you in their bad books, but these machines don't sleep. They're going to be ready to take your call.

Nell Watson 
Yes. Absolutely. And that means that, you know, they never get bored. They never get sick of you. And so what we need to do is to teach these systems to be less like a siren and more like a muse. To teach these systems to be less available or to be less interesting if somebody is overly engaged in the system for too long, right? You know, not to sort of cut somebody off entirely, but just sort of...

Gareth Workman
Yeah.

Nell Watson 
..take a little bit of distance. 

Gareth Workman 
Yeah. That's going to be a really tricky thing, you know, to teach or even embed. So, as you say, we've got much more powerful AI systems, but how do we structure that collaboration so that we maximise our benefits but still preserve human agency? How do you think we go about that? What do you see as the steps?

Nell Watson 
It's tricky because there are a great number of ways that AI systems and humans can collaborate, you know, and how that's structured provides greater or lesser agency for human beings. We're finding already that working with AI, for scientists anyway, tends to increase their output by 40%, but at the same time to slash their morale or their satisfaction in their job by 80%. You know, if you have total ownership of a project from its inception to its delivery, you feel tremendous satisfaction seeing it go out the door, you know, it's been a long journey.

Gareth Workman 
Yep. The endorphin release. You tick it and say done, outcome delivered.

Nell Watson 
Exactly. But if you're only doing part of that process and then handing it off to a machine, or if a lot of the actual ideation in the first place came out of a machine, you don't feel the same sense of satisfaction. And that's an even greater issue in a realm of algorithmic management. Increasingly, line management and middle management is being given over to machines who are deciding what hours to set for folks, even potentially deciding who gets hired and who gets fired, even if they're not supposed to.

And this means that, sort of, the dream of scientific management is finally possible, because you can constantly micromanage people from one task to the next, potentially even watching what they're doing on screen, right? And commenting on it.

Gareth Workman 
Yeah.

Nell Watson
And that's not psychologically healthy. Firstly, because people don't get any slack. You know, we're not going to be a hundred percent productive the whole day long; we have little curves, before and after lunch, et cetera. But if you're constantly micromanaged from one thing to the next, in a way where you don't understand the broader picture of what you're working on, you feel as if you don't have agency to influence the system. And if you feel disconnected from the outcomes, that is a recipe for burnout.

It also means that human beings may not have the ability to have water-cooler time, where they check in with colleagues and learn what's going on in the organisation, or absorb some of the culture of that organisation. And so there are gonna be profound psychological impacts on workers.

Gareth Workman 
Yeah.

Nell Watson 
Not so much from robots taking people's jobs per se, although that will happen to some degree, but rather from the robotisation of those jobs, such that we are obliged to work like robots. Agentic machines will, to some degree, usurp human agency. And that's going to be a major factor for many people around the world, especially within kind of clerical-type positions.

Gareth Workman 
It's amazing. We've almost already had a little glimpse into that future. Obviously with remote working, every interaction had to be scheduled as a call; you didn't have things just happen by chance, as you say, around a coffee machine, grabbing water, just bumping into colleagues. We've seen how this can happen. So again, it's the same thing: we could recreate a very similar pattern for people where everything is deliberate and maybe a bit more mundane. And, as you said, people just get burnt out through that constant cognitive burden, where you might not even..

Nell Watson 
Yes.

Gareth Workman 
..have the little easy tasks that give you that little boost through the day as well.

Nell Watson 
Absolutely. Well said, yes.

Gareth Workman 
And so one of the things we always look to do is to provide advice for people, leaders and businesses, you know, to help shape their strategy. So, what bit of essential advice would you give anyone listening, Nell, who's looking to effectively start their agentic journey but navigate the risks? If they're literally at the foothills right now, what's the one thing you'd like to say to them?

Nell Watson 
I'd say start now. We're just kicking off with agentic AI, but it's the next wave beyond generative and it's going to be as transformative as generative has been. We're going to have to figure out how to divide tasks according to whether a task may be better done by a machine or by a human, but also to carefully decide whether...

Gareth Workman
Yeah.

Nell Watson
..giving that task to a machine may potentially undermine the morale of that human being. That's something that needs to be carefully considered. We should ensure that we are aware of incidents going on in AI. For example, the AI incident database can be a fantastic resource for understanding where AI has gone wrong and the sorts of pitfalls that can arise and how to avoid that in future.

I would say that it's important to start small when working with agentic AI. Try to involve people in something that everybody finds a pain, like doing expenses and stuff like that. If you can find a small thing that everybody hates because it's a pain, but where, if it all went wrong, you could still do it manually, it wouldn't be a big deal.

That's the sort of perfect thing to start automating. And with agentic AI in particular, it's super important that we involve people from the onset, that we don't impose this stuff on people and say, right, this is your new manager, this is the new way we're gonna do our workflow in this venture, you know? Stakeholders, including customers, but especially employees, should have the ability to co-create that system, to have reasonable input into how it works, to test it, to have a trial for a week or two and see whether they prefer one thing or another, and what kinds of externalities might be created by trying to optimise for something that's more efficient.

I think that those kinds of experiments and prototypes and trials are going to be very important. We're going to want to do things carefully, but we are going to want to experiment, because agentic AI, like it or not, is the way of the future. We just have to do it in a very smart and careful way.

Gareth Workman 
Absolutely. And as you say, when you involve those who are doing the work now, they might reimagine a way of working differently, having lived and breathed it, as opposed to just slotting it into an existing workflow. And I really liked your point that, kind of, high pain, low impact is that first step. One last question I have for you before we come to a close: what's the one takeaway, Nell, that you'd want anyone to remember from what we've been talking about today? If they were to remember nothing else, what's the one thing?

Nell Watson 
Well, I'm going to stretch it to two.

I would say, most importantly, remember that AI price to performance is doubling every 2.6 months. That is going to absolutely change everything because it's a compounding growth curve. And it means that in five years, we're going to be in a very, very different place. Our world is going to be completely different. We're going to have common domestic and industrial robots that are able to work literally side by side with human beings.

That's going to be a game changer and it's going to drive us a little bit nuts as we deal with this Jetsons world that we've stumbled into. But it's coming and we need to prepare for that. That includes, of course, agentic corporations, which are going to surprise a lot of people. So we're going to see corporations that have a few humans at the edges, but where most of the work is actually being done by agentic AI systems in a swarm.

And those are going to be powerfully disruptive to the economy. They're going to create new entrants that don't really have to worry about salaries and offices. And that's going to be a game changer in the economy, and it's going to blindside a lot of people who aren't expecting, you know, their competitor to be an AI corporation. They're going to be expecting, you know, Pepsi looking at Coke and Intel looking at AMD, not these upstarts. The other issue is, of course, value and goal alignment.

These things were once theoretical and largely science fiction; now they're not. They're something that each of us has to wrestle with. And that's kind of the dark bargain of working with agentic AI: it's so much more capable, it can figure out all kinds of logistics and research problems, but we need to teach our values and goals to it very carefully.

Gareth Workman
Fabulous. So that brings us to the end of this episode. Thank you, Nell. It's been fascinating to explore the rise of agentic AI with you today.

So at Kainos, we are committed to helping you understand and leverage the transformative power of AI while navigating the challenges of ethical and responsible adoption. If you enjoyed this podcast, be sure to subscribe and visit kainos.com/beyondboundaries for exclusive content and episode reminders. Join us next time as we continue exploring AI's evolving impact on business and society. Until then, stay curious, stay innovative, and let's go Beyond Boundaries together!

End of episode