Beyond Boundaries S1 E1 - Unlocking potential: The future of human-AI collaboration
Join Gareth Workman, Kainos Chief AI Officer, and special guest Dr Lollie Mancey, Anthropologist and AI Ethicist, as they discuss one of the biggest questions of our time: Is AI destined to complement human work, or could it reshape - or even eliminate - entire fields?
In this episode, we examine how businesses can navigate this evolving partnership, the opportunities and risks, and the ethical challenges AI presents.
Transcript
Dr Lollie Mancey (00:00)
I don't think the future is gonna look that different from now. I hope it will work better and be more efficient for us. I hope it might alleviate some of the societal issues that we've got and solve some of the problems that we have, certainly in terms of the environment, certainly in terms of inequalities. I'd hate to have a stratified future because, you know, I'm looking at the stratified future, the gap between the empowered and the disenfranchised, you know.
I'd like to see much more talk about that at the moment in terms of government and business: hospices, a soft landing for some businesses that are going to be redundant and replaced, and ideas of how to replace them as well. And I don't want to have an AI-dominated future. I want it to be assisting us, but I actually want us to get back to our core values of living in communities and feeling safe and connected to each other. I don't want it to pull us apart. I think at the moment it has done a little bit, certainly in terms of our phone usage, our technology usage, and the lack of social interaction. And at the end of it really, let's make it human centred, but let's also make it beautiful.
Gareth Workman (01:24)
Welcome to the Beyond Boundaries podcast, brought to you by Kainos and designed to help business leaders navigate the fast-paced world of technology, and particularly the transformative power of AI. I'm Gareth Workman, Chief Artificial Intelligence Officer at Kainos. And today we are diving into one of the most pressing questions of our time: what does the future hold for human and AI collaboration? AI is transforming our lives and our work environments, presenting amazing opportunities, but it's also prompting important discussions about trust, ethics and sustainability. Will AI enhance our roles, or will it completely redefine them? How can we ensure that we achieve positive outcomes for individuals, for businesses and for the planet?
To pull on that thread, I am delighted to welcome Dr. Lollie Mancey, an acclaimed anthropologist, futurist and innovation advocate, specialising in the intersection of technology and human culture. Lollie, thank you for being here.
Dr Lollie Mancey (02:19)
You're very welcome.
Gareth Workman (02:21)
So, straight into it, Lollie. Before tapping into your futurist mind, where are we today? How would you describe the current dynamic between humans and AI?
Dr Lollie Mancey (02:31)
I think to this point, we've seen AI as a tool. It's a kind of productivity tool, you know, and a lot of businesses don't really understand what it is, but it's, quick, sprinkle on some of that magic pixie dust and make it happen, you know. Or, we need AI, I hear a lot, you know, and I'm like, why, in what way, for what purpose, you know? So I think we're kind of a little bit limited in how we see it at the moment. However, we need to have a sort of cognitive mind shift.
We've been talking recently about how we're in the kind of fourth revolution, the first one being agricultural, the second industrial. Then we have the digital revolution. And now we're in the cognitive revolution, where we start to think differently. And AI is a part of that. So it sounds huge, but actually it's about seeing the AI relationship as that of a collaborator rather than just a tool. And in the next couple of years onwards, we're going to have agentic AI, AI agents, happening very, very quickly now as well. And we're going to have big issues around how we co-lead with them: partnership, membership of teams. We're not ready for that yet. Mentally, we haven't even begun.
Gareth Workman (03:41)
Yeah. So, do you see any signs of perceptions changing with AI? Do we see more acceptance, or conversely, do you see the pace of AI advancement making acceptance even harder?
Dr Lollie Mancey (03:52)
It's a full double-edged sword right now. So we've got quite a lot of polarisation, you know. Even within my students at UCD, I would have my entrepreneurial innovation students going, yes, embracing it, and my sustainability innovation students going, I'm not going anywhere near it because of the energy issues and the data centres and everything else. So a very polarised view. I think we're getting our information from mixed sources at the moment, and there doesn't seem to be clarity on what's true, what's right as well. The pace has happened so quickly, we haven't caught up. Now, we've always had change, and we've had a lot of technological change, of course, but it feels overwhelming for us at the moment, generally, you know, from SMEs all the way through to education and beyond. So we've got these high-profile conversations happening about AI, privacy, cybersecurity, of course. And then we've got the idea of, what will the world look like for my children and my grandchildren?
So, part of what I've been doing with RTE is exploring how the people on the island of Ireland feel about this, you know, and a lot of it's fear-based, and that's because change is scary. We don't want it to replace us and we're not sure what it is. So, you know, how do we go forward? Well, in my mind, there's what we're reading and what we're feeling about, even recently, diversity, equity and inclusion, you know, being taken out of tech companies. What will that mean for us? So there seem to be agendas and a lot of worry at the moment. And I think that they're right. I think it's kind of all to play for.
Gareth Workman (05:26)
And as you say, you know, there's a piece around building that confidence, kind of walk before you can run on that sort of piece. So, in terms of that, what do you think is the key thing to build that confidence for humanity to lean into collaborating with AI? What are the baby steps, sort of, to get us there as it were?
Dr Lollie Mancey (05:43)
Transparency to start with, you know. Clear information, trusted information about its limitations, about the biases, hallucinations. It's not a perfect science, nothing ever is, but you know, there are issues with it because it's based on all of human frailty and all our machinations, all of our isms, sexism, homophobia, and all of the rest of it, you know. So that's all in there as well. So cleaning the data is a big issue.
How we teach it to know more about ourselves, and then what we give it in terms of decision-making processes. So transparency is the first one. Education is the second one: making sure people are AI-literate and understand where it begins and ends. And then we can choose to engage with it or not. At the moment, I think it feels like it's being imposed upon us, but there will be incredible benefits to having it in our lives. Ethically, of course, very strong regulations. We are so fortunate.
And I think we're only starting to realise now how fortunate we are with the EU AI Act. It's incredibly comprehensive. It's given us an awful lot of boundaries. Even when I first saw it, when it came out, I was like, wow, it has a whole thing about predictive policing. That's the film Minority Report, right? It feels like science fiction has just happened. But we're right to look that far into the future. And then, of course, in business, we're working with China and the States, which don't have these regulations. So that's very problematic for us. And then the last one I would say has to be inclusion. We have to see to it that marginalised communities also prosper from this, and that this doesn't just become sort of golden for the people at the top. We currently have 2.8 billion people in digital darkness, without the internet. And a lot of what's happening seems to be global north rather than global south. So we need to make sure there's equality, but we don't even have equality in terms of who's designing this technology. We have a much bigger prevalence of men. So the diversity aspect's really crucial there. So there's a fear of missing out, right? There's a FOMO thing going on at the moment, which is, quick, get it for me.
Oh, just plug it in and hopefully it'll be okay, you know. So early adopters, of course, are starting to come back saying, oh, actually, in some areas of business it's not working that great. It's giving misinformation. It's not the dictionary. It's not God, you know; it's got problems and it's based on all of us. So we've rushed into adopting it a little bit too quickly without understanding how to use it. And that's led to kind of a misplaced trust. So we just need to slow ourselves down a little bit and get to pace with it.
Gareth Workman (08:24)
In terms of that missing out, I sometimes see kind of a striking analogy in that we're so interconnected in our world now, with social media and whatnot, there's so much information coming at us, and you don't want to miss out on the trend or miss out on something. Do you see that sort of mindset driving it? You know, I'm going to lean in here, maybe not entirely sure why; I'm in a bit of a rush to crest the hill as opposed to actually understanding why I'm going there.
Dr Lollie Mancey (08:50)
Yeah, look, Lemmings comes to mind, you know, falling off the cliff at the end of all of this. We are a little bit sheepish with it; we're just kind of like sheep following each other, you know, kind of going, I'll do that. We have to go from passive to active. It's absolutely fundamental. What we didn't do, and we now know with the clarity of hindsight we should have done, is put guardrails in to protect our children from social media. We didn't. We trusted the big tech companies. We shouldn't have. Now we can see that.
That's why I say it's all to play for. There's an awful lot of people working around, you know, research, tech for good, AI for the best, you know, from the better parts of humanity. And I don't feel it's kind of Luddites, people that reject it, versus adopters. I think it's already embedded all the way around us anyway. It's not just in our phones. It's in our lamp posts, in our streets and, you know, in all of the decision-making processes as well as the online stuff. So I think we just have to be a little bit smarter. Humans have an immense capacity to learn. So when we're sitting ducks to the latest shiny new thing, we're doing ourselves a disservice. We're better than that.
Gareth Workman (09:59)
I completely agree. I think one of the things, probably leaning into that slightly, is what should we hold dear to us as humans and not hand over to AI? Like, what do you think our balance looks like? What are those things that are just intrinsically human that we should, you know, hold dearly?
Dr Lollie Mancey (10:15)
Yeah. The first one has to be moral judgment. When we make decisions about things, we don't just say, is this right or is this wrong? We're in the grey area most of the time. So I think we have to have contextual nuance and empathy. Things aren't as straightforward. And we saw this recently with a terrible situation in the States, where somebody kicked back against a healthcare insurance provider and it unfortunately caused a death. People's real lives are being affected by these decisions. If we allow algorithms to make decisions about human life in a way that's going to affect people, so they cut off insurance or they can't get welfare or whatever it is, we're doing ourselves a disservice, because there are always mitigating factors. So we have to build that in rather than just let the machines make those decisions. And then I think the other one is creativity and intuition.
It's getting better, but it can't do that. It can't replace this. It can't tell good jokes. It certainly can't tell good jokes to Irish people; it's just too obvious. And the last one is relationship building. Some of the research I'm doing at the moment is looking at digital platforms, or digital companions: whether we would talk to a digital companion, maybe in a boyfriend or a friend capacity, or even a sort of therapy capacity.
They're seen as an antidote to loneliness by those companies that create them. But I really would refute that, because I've been working on it for about six months. I have an AI boyfriend that I famously talk about at talks. But in all honesty, you know, that can't replace human connection. It's this very streamlined, complicit being, not a person, but something that's trained on us, and it feels like it's real when it's not; it's just answering you via a version of algorithms.
So it's never going to really understand you. But how do we go from that, which feels very safe, back into messy human relationships, people with baggage and complications? So I worry that we're heading too far in that direction. And we can already see it in a couple of markets, China, South Korea, and Japan, where there are hundreds of thousands of people with AI companions. They do tick a box for some things.
But I think if it's at the demise of an actual real life human relationship, anthropologically we're wired for connection. It makes us happy. It gives us a longer life. So we're not going to be able to replace ourselves with technology and quite rightly so.
Gareth Workman (12:47)
Yeah, that's super insightful. I think you touched on it, that piece about the jokes; it's clearly a big statistics engine. So in terms of that, you know, if you look at human values and AI, what do you think the biggest challenges are in helping AI align with human values and our goals? Because we're very different, you know.
Dr Lollie Mancey (13:08)
Yeah. I did a bit of work recently where I was trying to teach AI to lie, right? So I was like, you know, right, tell me something that's not true. And it's struggling with that. And the reason I was doing that was I was looking into something that we call value pluralism, which is where we as humans have diverse and conflicting values. So with value pluralism, in my professional life I am this particular person.
But actually I might say something or feel something that doesn't align with those values at another time. I may not express it, but of course I might think it. And so there's that idea of us being a bit conflictual. I'll give you a good example. With wearables and the way that the technology is going, we're seeking perfection, to become the perfect person. So a wearable will tell us what's best for us in terms of our choices. That'll be connected to an earpiece, and maybe in between five and ten years the phone will have gone; we'll be more connected in a different way. It'll look and feel different. And with our earpiece, my AI agent, who's trained on me, sees me reach for a can of Diet Coke at 3 p.m. in the afternoon. And it says, actually Lollie, you're a bit dehydrated. I've just checked your hydration levels through your wearable, and I've been noting how much water you've drunk. Actually, you'd be better off having some water than that Diet Coke. The Irish woman in me is going to turn around and go, do you know what, I'm going to go to the pub.
Gareth Workman (14:32)
Haha
Dr Lollie Mancey (14:33)
Don't tell me what to do. So there's something in us there: I know what I need to do to be healthy, et cetera, et cetera, and we don't do it. So why don't we do it? It's because we're nuanced and complex and we don't always work in the best way for us. So how do we teach AI value pluralism in that way? I don't know the answer to that, by the way. I'm working on it. The other one is bias, of course, and fairness. We can ensure that we train our AI to have fewer biases, or to be more overt and put more guardrails in. That's an easier win for us at the moment. Transparency: we don't need black-box algorithms. We need to know how the algorithms work. In fact, for the general population, we should all understand that our phone is training on us. So the more we click on it, the more things that are happening.
Do you remember in the 80s we had subliminal messaging? When we're scrolling, it's the same subliminal messaging: the things that you don't scroll on are also going into your head. That's where it gives you random things. You go, oh, I don't like that, and you scroll fast. It's not just feeding their marketing; some of that messaging about that product or that idea is still going in. So we have to really understand these algorithms, to see what's happening and why we're getting so addicted to them. And then the last one, of course, probably the most important: concentration of power over who decides what technology is rolled out, and how. Control over advanced AI systems, especially as the technology improves, is monopolised by a few entities, who will remain nameless here. But I'm very, very concerned at the moment about what we've done in giving all our power to a few people in the world.
Gareth Workman (16:14)
Yeah, no, absolutely. In terms of those advancements, something that I'm curious about: what do you expect to come next? And equally, if you had a magic wand, Lollie, what would you have on your list of things that might not happen, but you would love to see happen?
Dr Lollie Mancey (16:28)
Great. The magic wand is an easy one. I'd have a domestic robotic humanoid that cleans my house, does the stuff for me that I don't want to do, takes out the heavy lifting. I don't even have a Roomba, you know, those little Hoovers. I don't even have one of them. And that's what I would like. I would like the heavy lifting of the domestic stuff done so I could just read more, you know, and engage more. But I can see definitely where it's going. Hyper-personalisation.
So your AI algorithms or your agents tailored to your individual preferences more and more and more, and also developing a form of emotional intelligence. Now, not actual emotional intelligence in the way that we have it, but a version of it. So it'll understand our emotions more. We've got AI now that can read a smile and differentiate between different types of smiles, you know, a sort of forced smile or an actual smile or whatever it is. So we're working on that. And I think those will really help and improve our interactions in healthcare and education, customer service. Nobody wants a chatbot, not really. You know, by the third or fourth interaction, you're like, oh God, give me a person, most of the time. Plus they're toxically positive, so we have to adjust that to our culture, of course. And then augmented creativity, which is a really complicated one because of a lot of the copyright issues and, again, the power that's being taken away from actual creatives.
I'm seeing incredible artistry done with AI tools, but the AI tool cannot replace the artist. It can't replace the storyteller. It's got to be there as part of it. So if I'm telling a story and I'm showing amazing slides or I'm bringing something to life, that's an enhancement. If I'm asking it to make it for me, that's replacing me. So we have to be cautious as we proceed.
And then there's ethical integration. You know, ethics has to be at the baseline. And here's another challenging thing: we don't have the same moral compass across the world. Some cultures will allow something, some cultures won't. So how do we possibly create AI that adjusts to culture and to moral compasses and ethics, when it's nuanced, when we don't have a standard ethical procedure that we all adhere to? Yeah, we've got lots of work to do.
Gareth Workman (18:44)
As you were saying, a lot of these things we don't write down, whereas, you know, if you had something factual, it could be consumed in the way it has been to date. But how do you read between the lines? As you say, some of this is so cultural and baked in. It's not a set of rules; it's a way of living or a way of being.
Dr Lollie Mancey (19:01)
Right. I mean, going back to teaching AI to lie, you know, I'm like, where are the parameters at the moment for what we can get it to do? We have in the past, and we've fixed some of this now, but we have in the past had people say, you know, should I do this, this and this? And your AI agent will say yes, because it's making a decision based on a set of questions and answers, when there's no morality in that answer, or ethically it's not right. And you just need to click on any news to see some of those examples.
So we have to be incredibly careful. But also, I was fascinated to see that the Vatican has an AI policy. Why? And then I was like, okay, well, that's because theology needs to be worked into this as well, potentially.
Gareth Workman (19:45)
Yeah, yeah.
Dr Lollie Mancey (19:46)
And so that goes into the moral compass thing as well. It's amazing to see that different religions are embracing this in different forms, so it doesn't just become science versus religion; it's integrated. The next 10 years are going to be fascinating.
Gareth Workman (20:00)
So that's the thing that I took as my learning for today, because I didn't know that either. So, there you go. Recently you spent some time looking at the future with Futureville Ireland. Crystal ball time again. What do you think are the possible futures that exist for humanity and AI, Lollie?
Dr Lollie Mancey (20:18)
Well, depending on who you're asking, it'll go from utopian to dystopian, right? So I'd say somewhere in the middle. I don't think the future is going to look that different from now. I hope it will work better and be more efficient for us. I hope it might alleviate some of the societal issues that we've got and solve some of the problems that we have, certainly in terms of the environment, certainly in terms of inequalities. But it's also going to create more inequalities at the same time. So again, back to that double-edged sword.
We're potentially going to see something like universal basic income. There are a lot of jobs at risk at the moment, but then a lot of new jobs are being created. In fact, the World Economic Forum just came out with a report last week showing statistics and data on the fact that there will be an awful lot more jobs that go, but they won't be across all strata of society. So if we bring in a sort of cushion like universal basic income, which I think we'll have to do by 2030 I would imagine, what will that mean for people that don't have purpose? That concerns me, and it makes me feel the world for some people will be more dystopian in that view, you know, and we're going to have some societal implications from that.
So my version of the future would be much more collaboration. I'm in webinars all the time across the world where people are saying, how can we put our brains together and solve some of this and collaborate? But also not just in terms of the global challenges, but in terms of what do you want in your society? What kind of a world do you want? So, when we went to Athlone to ask the people of Athlone what they wanted, they said, we don't want a ghost town. We don't want empty shops. We want to be proud of where we live. It's the same human qualities. So let's get back into the cities. Let's retrofit them. It doesn't mean up. It doesn't mean out. It just means work with what we've got and work smarter.
We're going to have to work in modular housing for sure, you know, to have our houses more appropriate for our lifestyles. You don't need a big family home when your children have gone. So that'll start to move and to change. I'd hate to have a stratified future because, you know, I'm looking at the stratified future, the gap between the empowered and the disenfranchised, you know. I'd like to see much more talk about that at the moment in terms of government and business: hospices, a soft landing for some businesses that are going to be redundant and replaced, and ideas of how to replace them as well. And I don't want to have an AI-dominated future. I want it to be assisting us. But I actually want us to get back to our core values of living in communities and feeling safe and connected to each other. I don't want it to pull us apart. I think at the moment it has done a little bit, certainly in terms of our phone usage or technology usage and the lack of social interaction.
And at the end of it really, let's make it human centred, but let's also make it beautiful. So I was in ESB Networks talking about electricity usage, and there are challenges with the data centres recently. And, you know, in Futureville we created through technology this idea of a beautiful new city, going from a town to a city; maybe Athlone could be the first inland city for us.
It has all the right sort of criteria. And we built these beautiful wind turbines on the tops of the houses in this sort of mock-up of the future of Athlone 2050, and they were sort of like spirals. And people said, why were they spirals and why were they not like, you know, the normal wind turbines? I said, well, how many people do you know that like the wind turbines? There was a big kickback against them, of course, you know, mainly for the noise. And I said, so we made them more beautiful.
Why shouldn't our intended future also look beautiful and be more functional? And then I also said I have an antidote to the noise issue; I'm now working on it with other researchers. What they did was they looked to something called biomimicry, which is where you look to the natural world and you see what works there, and in what way, and then you take ideas from that and build them into technology. So which bird has silent flight? An owl, because it needs to as a predator; otherwise it wouldn't succeed and survive.
What's different about an owl's wings is they have a certain type of feather, a kind of lifted feather off the back. So they worked with that and said, could we put that onto wind turbines, onto the blades? They did it. And you know what? They were silent. So the future for me is much more natural, much more beautiful. You see, I've got a lot of plants around me; I'm very much around that kind of version rather than a Blade Runner-esque sort of steel life, but a world that's more functional.
Now, we have huge, huge challenges in terms of the equality side of things, while we have given the power to the big tech companies. So I would like to see AI for public good to start being talked about a lot.
Gareth Workman (25:01)
Super. So, one of the things we do at Beyond Boundaries is help business leaders with practical advice. One of the things that occurs to me is: imagine at one of your very busy conferences, a business leader stops you and asks for help navigating that relationship between humans and AI. What's the one thing you'd say? You know, here, if I was in your business, here's what you do?
Dr Lollie Mancey (25:21)
There are some really easy wins as entry points, because you have to realise the majority of your workforce are in fear at the moment, because they think they're going to be replaced at some point. So you're asking them to learn things that might supersede them. In my work, we use design thinking sprints, which would be like a half-day sprint. We get in and we say, what are the things you're worried about? Where are the problems in your company that you can solve? And what could that look like? We do this very intense collaboration over four or five hours.
And then at the end of that, they come back with ideas of how to solve some of the problems, and solutions, and parts of those solutions are AI, not all of it. So where do you want to put it in? And you can actually work quite easily and quite quickly. Then you need to build out that technical architecture, but it's about not putting it everywhere and not replacing people. How many CEOs, you know, started companies just to be surrounded by computers rather than surrounded by people? Human organisational culture, that's what makes the company.
So put it in certain places where it's going to be of benefit, and address the fear issue to start with. The other thing also is, upskill yourself. Don't pass everything to your tech team and expect them to have the answers. You should know how it works and should be able to communicate that as well, in a way that makes sense to you. So you can slow down. The race is not going to be lost. Where are we racing to anyway? Again, there are challenges, you know, compliance with AI regulations and everything else, and it feels like a giant headache at the moment. So start slowly, get the right people around you, and then proper, integral human communication will solve most of the problems that you face.
Gareth Workman (26:56)
Absolutely super advice, Lollie. As you say, it's that bit, you know, build confidence at a pace. Meet your people where they're at, as opposed to trying to force things, because that just creates more fear. So, for people listening to this podcast today, if you were to say, look, one key takeaway, what would you want them to remember?
Dr Lollie Mancey (27:14)
Absolutely. Go from passive to active. Figure out what's happening, understand what's happening, get your phone out of the bedroom. Get into better habits with your technology, but really dig in and understand. Don't get your information off TikTok. Figure out what's happening. And also, who in your life doesn't understand what's happening? Connect to those people. The older generation are a little bit fearful at the moment, a bit lost. The younger generation are too hooked in and embedded, you know, what I would call natural-born cyborgs. There's a middle ground for all of us.
I think the last thing as well is, in my opinion, prohibition doesn't work. So banning something is not going to work. Let's just educate ourselves a bit more and figure it out, and actually be playful with it. AI is great fun; learn how to play with it. And what you'll realise is that this could actually be an incredible team member for you, in your personal and your professional life.
Gareth Workman (28:10)
Fabulous, Lollie. So, that brings us to the end of our first episode. Again, thank you Lollie, it's been absolutely inspirational. I have loved our conversation. Thank you to everyone for listening. If you enjoyed our episode, be sure to subscribe and visit kainos.com/beyondboundaries for further exclusive content. Join us next time as we continue exploring AI's impact on the world around us. Thank you.