Lunchtime debates: What’s the real promise of generative AI for organisations?
You’re at an industry conference, it’s lunchtime, and you’re catching up with some colleagues and peers. Inevitably, the topic of ChatGPT, Bard and other text-based generative artificial intelligence (AI) technologies comes up.
Your friend says: “Generative AI like ChatGPT is the future of business! Its capabilities are limitless and it can solve pretty much any problem that businesses face. You can automate your customer service, sales, and marketing processes, all while cutting costs and improving efficiency. The technology can identify patterns that humans would not be able to spot. That means you can make better, data-driven decisions.
Really, I see it as the ultimate solution if you want to stay ahead of the curve and beat the competition. You can learn what your customers really need, provide personalised recommendations, improve customer satisfaction and increase sales. And let's not forget about the time savings. It’s a game-changer!”
Your colleague doesn’t agree, however. He responds: “It’s a game-changer alright, but not for the better. I don’t trust it. It's only as good as the data it's trained on, and if that data is biased, those biases are going to be reproduced in its output. This could have serious implications for society, as it could perpetuate existing inequalities and make it harder for marginalised groups to succeed.
And then there are the security and privacy risks: these tools collect vast amounts of data on users. They can also be used to spread misinformation and propaganda, which can have serious consequences. We've already seen examples of this happening on social media platforms, and it's only going to get worse now that these technologies are so widely accessible. I don’t like it one bit, and I don’t think businesses should be touching it at all.”
Our view? We have watched these kinds of debates unfold over the last couple of months, and we have been exploring the potential of text-based generative AI technology in safe, sandboxed environments. These are our reflections:

The potential
Generative AI tools and their underlying foundation models can provide huge benefits to citizens, and businesses can accelerate their impact with this technology as it exists today. Many of the points made by those excited by the technology are valid: there is an opportunity to connect information in novel ways, and to achieve significant efficiencies by shortening the time to, for example, draft a business case, compare documents, or generate collateral or even entire presentations. Those more sceptical also have a point, however: the technology is not infallible.
The strengths and weaknesses
Foundation models (large language models like GPT-4 that underlie generative AI technologies like ChatGPT) are ‘pattern detectors on steroids’, and because of this advanced knowledge of ‘what likely comes next’, they can make very sensible guesses about what is likely to be a good answer to your prompt. This is their power: never before have we had such an easy way to draw connections in language, bring information together or retrieve it, summarise content, and create convincing new text – and the potential applications are countless. We really do believe it will revolutionise ways of working.
However, the models’ strengths are also their weakness. Because these tools generate content based on language structure rather than knowledge, they are prone to over-generalising and to making factual mistakes in a convincing manner. Organisations considering deploying these technologies need to keep this front of mind, as it should inform the kinds of use cases that are selected. For example, areas where errors can have a significant negative impact (like giving legal or medical advice) should only be considered – if at all – with serious human controls and extensive testing.
Important tools in the technical toolbox
From a technological perspective, there are various tools interested users can deploy to maximise their chance of creating a meaningful and effective solution with these technologies. Fine-tuning is one such tool. This is a method where the consumer provides additional, domain-specific data to the base model, helping the model become more familiar with the relevant context. Fine-tuning increases the likelihood that the model will pick up the nuances of a particular context.
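As an illustrative sketch (not a description of any specific project of ours), fine-tuning usually starts with preparing example pairs in the format the model provider expects; OpenAI’s original fine-tuning endpoint, for instance, accepted JSONL files of prompt/completion pairs. The example records below are invented for illustration:

```python
import json

# Hypothetical domain examples drawn from an organisation's own documents.
# In practice you would need many more, carefully curated, examples.
examples = [
    {"prompt": "Summarise our returns policy for a customer:",
     "completion": "Items can be returned within 30 days with proof of purchase."},
    {"prompt": "Draft a one-line status update for the Q3 migration project:",
     "completion": "Q3 migration is on track; data validation completes next week."},
]

def write_jsonl(records, path):
    """Write one JSON object per line - the format many fine-tuning APIs expect."""
    with open(path, "w", encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")

write_jsonl(examples, "training_data.jsonl")
```

The resulting file would then be uploaded to the provider’s fine-tuning service; the exact upload step varies by provider and API version, so we have left it out.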
Another essential tool is prompt engineering, which is the skill of designing questions and prompts so as to achieve desired outcomes. It generally involves crafting prompts that are clear, concise, specific and give the right amount of information. For example, rather than only asking to “generate a list of potential product ideas for our company”, you may first briefly tell the generative AI tool what your company does and wants to explore, then ask it to generate a list of products that achieve X and appeal to Y customer base.
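That idea can be sketched in a few lines of code. The company details below are invented for illustration, and the prompt would be sent to whichever generative AI API is in use:

```python
def build_prompt(company_context: str, goal: str, audience: str) -> str:
    """Assemble a context-rich prompt rather than a bare, vague request."""
    return (
        f"Context: {company_context}\n"
        f"Task: Generate a list of five potential product ideas that {goal} "
        f"and would appeal to {audience}.\n"
        "Format: a numbered list, one sentence per idea."
    )

# The bare request from the example above...
vague = "Generate a list of potential product ideas for our company."

# ...versus the same request with context, constraints and a format.
engineered = build_prompt(
    company_context="We sell cloud-based accounting software to small UK retailers.",
    goal="reduce month-end bookkeeping effort",
    audience="time-poor shop owners",
)
```

The engineered version gives the model the context, goal and output format it needs to produce something usable on the first attempt.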
The need for ethical control
Your more sceptical colleague shared several valuable points. Foundation models that are built on large amounts of data can be biased and can risk invading privacy, and generative AI can produce misinformation. However, the ethical risks and potential harms are not the same for every use case.
For example, using generative AI to rewrite text to personalise it for some personas may be more vulnerable to bias than a solution that summarises specific user manuals fed to the tool. Moreover, much of the potentially harmful impact can be managed, for example, by ensuring appropriate human oversight.
We run harms workshops for individual AI use cases to map out potential harms and articulate harms mitigations. The output of these sessions feeds not only into impact assessments, but also helps inform subsequent design choices in the development of AI solutions.

The legal landscape
It is almost always advisable to conduct a legal review of new organisational solutions, but it is worth highlighting explicitly in this space, because the legal landscape is complicated and actively developing. Questions could be raised about the approach to data collection and processing for the training datasets behind the foundation models that underpin generative AI solutions, with regard to GDPR compliance and potential breaches of the terms and conditions of the websites from which the information was scraped.
In addition, whether copyright can be granted on content created with the assistance of generative AI remains an open question. All of this sits within an evolving legal and regulatory landscape, the impact of which organisations would be wise to prepare for. Incidentally, many of the legal considerations discussed here are as relevant to AI services that generate images as they are to those that generate text.
The best way to leverage the potential
To leverage the potential of generative AI services, we recommend not to wait, but to exploit their strengths while learning about their weaknesses. Here are some of the things we’ve learned so far:
- Move at a pace which is inverse to the risk taken. When the risk level of the use case is high, in terms of its scale, its potential for harm and/or the reputational impact to your organisation, the focus should be on a controlled deployment with a strong emphasis on testing (both the solution and the harms mitigations), adequate training of users and post-deployment monitoring mechanisms. For innovative experimentation in sandboxed deployments, and for low-risk use cases, it can make sense to move with pace and agility and a focus on learning, without skipping the step of thinking through the impacts and necessary controls.
- Think of text-based generative AI as a powerful engine for converting or elaborating on language, rather than a knowledge dataset. For example, it is uniquely positioned to generate base content or starter code, summarise text, or convert language into code or queries.
- Be clear on the question you are asking and build in technical (e.g. prompt engineering) as well as non-technical (e.g. human review) controls to achieve the answer you are looking for.
- The human in the loop is often your most valuable asset in the chain, so teach them well.
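To make the last three points concrete, here is a minimal sketch of pairing an engineered prompt (a technical control) with a human-review gate (a non-technical control) before generated SQL is run. The `call_model` function is a stand-in for whichever generative AI API you use – it and the schema are assumptions for illustration, not real library calls:

```python
def call_model(prompt: str) -> str:
    """Placeholder for a real generative AI API call (illustrative only)."""
    return "SELECT customer_id, SUM(total) FROM orders GROUP BY customer_id;"

def natural_language_to_sql(question: str, schema: str) -> str:
    # Technical control: a clear, constrained prompt with the schema as context.
    prompt = (
        f"Schema: {schema}\n"
        f"Write a single read-only SQL query answering: {question}\n"
        "Return only the SQL."
    )
    return call_model(prompt)

def human_approved(sql: str) -> bool:
    """Non-technical control: a person reviews the query before it runs."""
    print(f"Proposed query:\n{sql}")
    return input("Run this query? [y/N] ").strip().lower() == "y"

sql = natural_language_to_sql(
    "Total spend per customer", "orders(customer_id, total)"
)
```

The point is the shape of the pipeline, not the specifics: the generated output is treated as a draft that a trained human accepts or rejects, rather than something executed automatically.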
That is our view of generative AI. Lastly, the biggest impact will often only be achieved when you look at this technology as one piece of a wider infrastructure. You may need a platform for storage and a user interface, and design and research effort should be invested in the user experience.
At Kainos we have extensive experience in end-to-end digital transformation, both in the public and commercial sector, and we bring years of expertise in artificial intelligence.
We can provide support in deploying exciting new technologies like generative AI in a well-considered manner. If you would like to learn more, please contact our Data & AI team.
And yes, we did use generative AI to help us write some sections* of this article, after which we spent time making it right and making it our own. *The debate in the introduction and the prompt engineering example were generated with the help of ChatGPT from OpenAI.