10 predictions for AI in 2025
Generative AI took 2024 by storm, with the global market size now reaching $25.86 billion. Market research projects a further increase of around 46% to $37.89 billion in 2025.
AI technologies such as natural language processing, computer vision and generative models can now produce high-quality, human-like content with increasing accuracy. Businesses in entertainment, healthcare, marketing and design are already unlocking generative AI’s potential to revolutionise content creation, product development and customer engagement. This technological shift isn’t just about improving efficiency – it’s reshaping how organisations tackle creative and strategic challenges.
As 2025 approaches, our experts at Kainos share their top predictions for how AI will evolve in the year ahead.
1. We’ll move beyond the hype to practical innovation
By Gareth Workman, Chief AI Officer
We tend to overestimate what technology can achieve in the short term and underestimate its potential in the long term. This has been particularly true for AI. As we move beyond the initial hype, 2025 will usher in more meaningful and transformational applications.
I predict companies will reimagine their businesses through an AI-enabled lens, focusing on harnessing the power of AI to solve real-world problems in novel ways.
Within our AI Catalyst team, we’re already delivering phenomenal results, helping customers solve complex business problems with practical AI solutions in just five days. The pace of change has never been faster, and this momentum will only continue. Reimagining our world through the practical application of AI is what will ultimately shape our future.

2. The convergence of AI innovations will edge us closer to Artificial General Intelligence (AGI)
By Aislinn McBride, Chief Technology Officer
This year, I predict major advancements in the speed and accuracy of Generative AI, alongside the rise of specialised models and personalisation. Together, these developments could bring us closer to a form of AGI. Knowledge workers using AI – like financial analysts and content creators – will be at least 30% more productive than those who don’t.
In the public sector, AI adoption has the potential to cut processing times by 40% while improving citizen satisfaction. Governments using AI effectively will see better decision-making and policy implementation, with early adopters achieving big improvements in service delivery and public satisfaction rates.
3. AI will become a strategic thinker
By Richard Webb, AI Solution Consulting
In 2025, AI will go beyond automating tasks and boosting efficiency. We’re now seeing the emergence of AI that can reason, understand the impact of its decisions, and even catch its own mistakes. This means AI can be applied to complex tasks like designing new services, creating go-to-market strategies, and even driving new business models.
But it's not just about the technology. Success will come from how we choose to use it. A smart, strategic approach to AI will be the key to unlocking its full potential and thriving in this new era of innovation.
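As a rough illustration of what “catching its own mistakes” can look like in practice, here is a minimal generate-critique-revise loop. The call_model helper is a hypothetical placeholder for whichever LLM API you use; the loop structure, not the API, is the point.

```python
# A minimal generate-critique-revise sketch, assuming a hypothetical
# call_model(prompt) helper that wraps whichever LLM API you use.

def call_model(prompt: str) -> str:
    """Placeholder: send the prompt to a model and return its reply."""
    raise NotImplementedError("Wire this to your chosen model provider.")

def answer_with_self_check(task: str, max_revisions: int = 2) -> str:
    # First draft of the answer.
    draft = call_model(f"Solve the following task:\n{task}")
    for _ in range(max_revisions):
        # Ask the model to review its own draft.
        critique = call_model(
            "Review the draft answer below for factual or logical mistakes. "
            "Reply 'OK' if it is sound, otherwise list the problems.\n"
            f"Task: {task}\nDraft: {draft}"
        )
        if critique.strip().upper().startswith("OK"):
            break  # the model found no issues with its own draft
        # Revise the draft to address the problems it identified.
        draft = call_model(
            f"Task: {task}\nDraft: {draft}\nProblems found: {critique}\n"
            "Rewrite the draft so it addresses every problem."
        )
    return draft
```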
4. Large Language Model (LLM) growth will slow, but innovation will continue
By Krzysztof Suchomski, Head of Technology (Data and AI)
The rapid scaling of large language models (LLMs) will reach a plateau, with models from organisations like Anthropic, OpenAI, Meta and major cloud hyperscalers converging in terms of capabilities.
However, progress in the field will continue at pace. We’ll see breakthroughs in smaller, more efficient models and specialised generative AI tailored to specific industries. New architectures, like OpenAI’s innovative o1 model, will push the boundaries of what these systems can achieve.
Improved hardware will also play a key role, reducing costs and speeding up processing. These advancements will empower organisations to develop smarter services and tackle a broader range of challenges, from personalised customer experiences to cutting-edge industry solutions.
5. Small language models for personalised, cost-effective solutions
By Conor Martin, Senior Software Engineer
We will see a rise in discussions around smaller language models. While the focus until now has largely been on scaling up, the future of AI could be about scaling down. In 2025, we’ll see growing interest in smaller, specialised language models designed for specific tasks. These models require less data and are cheaper to train, making them an ideal choice for startups and smaller businesses.
But this doesn’t mean LLMs will become obsolete. Instead, the real opportunity lies in combining small and large models to create efficient, powerful solutions.
For example, a retail company could use a mix of large and small AI models to enhance operations. For complex customer queries, they deploy a large model, while smaller, specialised models handle tasks like processing returns or managing inventory. This combination allows for efficient, cost-effective operations, with the large model providing quality interactions and the smaller models streamlining routine tasks. It helps the business stay agile, offering personalised services while optimising AI costs.
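A minimal sketch of that split is shown below, assuming a simple intent check decides which model handles each query. The model stubs and the classify_intent helper are illustrative placeholders, not any specific product’s API.

```python
# A minimal routing sketch: cheap, specialised small models handle routine
# intents, while a larger general-purpose model handles everything else.
# Model stubs and the intent classifier are illustrative placeholders.

ROUTINE_INTENTS = {"process_return", "check_inventory"}

def classify_intent(query: str) -> str:
    """Placeholder classifier; in practice this could itself be a small
    model or a rules engine."""
    lowered = query.lower()
    if "return" in lowered:
        return "process_return"
    if "stock" in lowered or "inventory" in lowered:
        return "check_inventory"
    return "general"

def call_small_model(intent: str, query: str) -> str:
    return f"[small:{intent}] handled: {query}"  # stub for a specialised model

def call_large_model(query: str) -> str:
    return f"[large] handled: {query}"           # stub for a general LLM

def route(query: str) -> str:
    intent = classify_intent(query)
    if intent in ROUTINE_INTENTS:
        return call_small_model(intent, query)   # low-cost, task-specific
    return call_large_model(query)               # richer, more expensive
```

In this shape, a query like “I want to return my order” never reaches the expensive general model, which is where most of the cost saving comes from.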
6. Agentic collaboration
By Matthew Lamb, Experience Design Principal
I foresee a significant increase in the adoption of AI agents with specialised knowledge. These agents will work together – and with humans – to tackle complex problems more effectively and accurately than any single large language model could manage on its own.
While AI will play a key role in solving problems, human judgement and context will remain crucial in shaping final outcomes. This synergy between humans and AI will not only improve accuracy, but also build trust, speed up adoption, and make AI solutions a seamless part of everyday life.
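As a very small sketch of that pattern, assuming each agent is just a specialist prompt wrapped around a model call and a human signs off on the combined output, the agent roles and call_model helper below are hypothetical placeholders.

```python
# A minimal multi-agent sketch: specialised agents each contribute to a
# problem, and a human reviews the combined result before it is used.
# The agent roles and call_model helper are illustrative assumptions.

from dataclasses import dataclass

def call_model(prompt: str) -> str:
    """Placeholder for a call to whichever LLM backs each agent."""
    raise NotImplementedError

@dataclass
class Agent:
    name: str
    speciality: str  # folded into the prompt so the agent stays on-topic

    def work(self, task: str, context: str) -> str:
        prompt = (
            f"You are a {self.speciality} specialist.\n"
            f"Task: {task}\nContext so far: {context}\n"
            "Contribute only what falls within your speciality."
        )
        return call_model(prompt)

def solve_with_agents(task: str, agents: list[Agent]) -> str:
    context = ""
    for agent in agents:  # each agent builds on the others' output
        context += f"\n[{agent.name}] {agent.work(task, context)}"
    print("Draft for human review:\n", context)
    approved = input("Approve this result? (y/n) ")  # human stays in the loop
    return context if approved.lower() == "y" else "Escalated for rework"
```

The sequential hand-off keeps the example short; in practice agents may run in parallel or debate, but the human review step is the part worth keeping.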
7. Risk management tools will be adopted
By Suzanne Brink PhD, Head of AI Ethics and Governance
With the upcoming AI Safety Bill consultation, the Responsible Technology Adoption Unit's promise of an updated AI Assurance roadmap, and clearer implementation standards for the EU AI Act, I expect we’ll gain a better understanding – though not all the answers – of what makes an AI system trustworthy.
In response, we’ll see leading organisations adopting tools to manage AI risks more effectively. This includes AI governance platforms (such as those developed by our partner Enzai) and MLOps solutions designed to scale AI system assessments in line with evolving standards. This won't all be robust by the end of next year, but we'll see some important steps being taken.
8. A double-edged sword for cybersecurity
By John Sotiropoulous, Senior Security Architect
GenAI security adoption will mature and Agentic AI will come into focus, bringing autonomy and scale to a variety of use cases. Cybersecurity will benefit, using it to address a growing workload, especially in incident response, which faces new, stricter requirements under the forthcoming Cyber Resilience Act, expected this summer.
But Agentic AI will challenge security, too. Combined with multi-modal and on-device models, these agents will create new attack vectors. Their scale, autonomy and dynamic adaptation will make it easier to compromise existing guardrails and will call human oversight strategies into question. This will demand clear strategies for how we scale AI security.
Expect new security research in this area, along with new standards and guidance from governments and standards organisations to help organisations adapt to these new AI capabilities.
9. Increased legal scrutiny
By Seto Adenuga, AI Governance and Ethics Manager
2025 is set to be a pivotal year for legal oversight and consumer protection. The EU is updating the Product Liability Directive to cover software, which means technology companies will face increased scrutiny for the performance and potential harms of their AI systems. This legislative shift is expected to trigger a significant rise in lawsuits, allowing consumers to seek compensation for damages caused by AI technologies. While we’ve already seen some of these developments in 2024, I expect the number to increase in 2025 – highlighting the growing need for robust AI governance and ethical standards.
10. Digital humans
By Leszek Gurniewicz, Senior Software Engineer
One of the most exciting developments in AI will be the rise of digital humans, or AI-driven avatars, designed to enhance customer experiences. These virtual assistants are becoming more advanced, offering highly personalised and interactive engagements that mimic human-like qualities. We’ve already seen this with the rollout of our digital panellist Clay at AI Con this year.
Digital humans will be used across industries such as retail, banking, healthcare, e-learning, and hospitality, to provide personalised services. However, overcoming the "uncanny valley" effect, where avatars are nearly human but not quite, will be key to ensuring users feel comfortable interacting with them. As AI technology advances this year, we’ll see more exciting developments in this space, transforming how businesses connect with customers.
What to do now
If AI is part of your strategic roadmap for 2025, here are some key considerations:
- Think big, start small: Don't just chase the big, flashy AI projects. Focus on a mix of smaller, achievable wins that deliver value quickly, alongside those ambitious "moonshot" ideas. This balanced approach ensures you're getting immediate returns while still pushing the boundaries of what's possible.
- Data is king: The most successful AI strategies are built on a foundation of strong data infrastructure. Invest in the systems and processes needed to collect, clean, and manage your data effectively. This will allow your AI to learn and perform at its best.
- Cultivate AI talent: Having the right people in place is crucial. Invest in training your existing workforce on AI and actively recruit individuals with AI expertise. This will ensure you have the skills needed to develop and implement your AI strategy effectively.
- Prioritise responsible AI: As AI takes on greater responsibilities, ethical and security considerations become paramount. Develop clear guidelines for responsible AI development and usage, addressing potential biases, ensuring transparency, and promoting fairness. Ensure your cybersecurity teams understand the new adversarial threat landscape, employ appropriate defences, and update their operations and playbooks to safeguard AI.
- Don't obsess over LLMs: While large language models are important, they're becoming increasingly common. Your competitive edge will not come from which LLM you choose, but from how you use AI to leverage your unique data, knowledge, and business context.

Responsible technology. Remarkable outcomes.
Powered by nearly a decade of AI research and innovation, we've built our reputation for excellence on a foundational commitment to ethical, responsible and secure AI solutions. If you’d like to find out more about how Kainos can support you on your AI journey, you can contact our team for more information.