The AI Safety Summit – 7 Key Takeaways

In this article, Ruth McGuinness, Data and AI Practice Lead, shares her 7 key takeaways from the AI Safety Summit that took place in the UK last month.
Date posted: 7 December 2023
Reading time: 4 minutes
Ruth McGuinness, Data & AI Practice Lead · Kainos

On 1-2 November 2023, the UK hosted the first-ever global summit on Artificial Intelligence (AI). The AI Safety Summit brought together a diverse international delegation to contribute expertise on the safety and regulation of AI.

If you’re struggling to find a good summary of the key talking points from the Summit, look no further. Ruth McGuinness, our Data and AI Practice Lead, shares her 7 key takeaways:

1. Bletchley Declaration Signed: 

The EU and 28 countries (including the UK, the US and China) signed the Bletchley Declaration, a commitment to designing AI that is safe and human-centric. Signatories agreed to cooperate by building a “shared scientific and evidence-based understanding” of AI risks, with each nation categorising those risks under its own legal framework. Some media outlets highlighted China’s participation in particular, observing that “all major AI superpowers positively engaged” and reading this as a sign of global cooperation.

2. AI Safety Institutes for Everyone:

The UK and the US both announced the creation of AI Safety Institutes. The UK's Frontier AI Taskforce, chaired by Ian Hogarth (and formerly known as the Foundation Model Taskforce), will transition into the new UK AI Safety Institute. The US AI Safety Institute (AISI) will be run by the National Institute of Standards and Technology (NIST), an agency of the United States Department of Commerce whose mission is to promote American innovation and industrial competitiveness. The two new bodies will ensure alignment between the UK and the US.

3. Say Goodbye to the ‘Compute Divide’: 

The UK announced ‘Isambard-AI’, the first component of the UK’s AI Research Resource and set to be one of the most powerful supercomputers in Europe. Plans for the supercomputer were announced by the government in March, backed by a £900 million investment to transform the UK’s computing capacity and establish a dedicated AI Research Resource. Isambard-AI will vastly increase the UK’s public-sector AI compute capacity, bringing it in line with the industrial sector.

4. Regulatory One-Upmanship: 

Two days before the Summit, the US announced an Executive Order on AI, the first action of its kind. We expect stricter safety measures to follow, such as algorithmic auditing, red teaming and other safety mechanisms. The use of the Korean War-era Defense Production Act to invoke the order, a law typically reserved for war or national emergencies, drew some criticism as “distorting AI through the lens of national security”. Debate persists over whether AI should be regulated at the application layer (e.g., healthcare, chatbots and autonomous vehicles) rather than via compute thresholds for model training, for fear that the latter would stifle innovation.

5. Existential Doom-Mongering vs. Short-Term AI Risks:

The tech community remains divided on the existential threat of AI, but the Summit consensus was to address immediate risks, such as mis- and disinformation in elections. ‘Loss of control’ discussions focused not on AI systems becoming autonomous, but on humans building AI systems without appropriate oversight.

6. Open-Source Debate: 

While open source has its limitations, discussions reflected that keeping innovations privately held is also restrictive. A collective approach was recommended to ensure more accessibility, transparency, and safety in AI-augmented decision-making.

Attendees cautioned against the overregulation of AI, so as to leave room for open-source innovation. This echoes the broader view of the tech community (see the Mozilla open joint letter): adding burdens to foundation model development will unnecessarily slow AI’s progress. After all, “today’s supercomputer is tomorrow’s pocket watch”.

7. There’s more where that came from: 

Plans for future AI Safety Summits were confirmed, with upcoming events in South Korea in six months and in France in 2024. This cadence should help focus minds, as we will be expecting to see signs of further progress within a short time frame.

The intentions of the Summit were clear, and in the weeks since, AI safety has remained an ongoing and evolving global dialogue. Both the public and private sectors are recognising the complexities of the quickly shifting challenges ahead, especially in navigating the balance between safety and business priorities. As AI’s trajectory remains uncertain, establishing a safety net for innovation becomes imperative.

At Kainos, it’s our expertise in the ethics and trustworthiness of AI that sets us apart. We’ve partnered with Tortoise Media to develop a leading set of guiding principles and a framework for assessing companies on their responsible use of AI.

The Tortoise AI Responsibility 100, developed in partnership with Kainos, has been informed by experts in the sector to assess FTSE 100 companies on their use of AI technology.

We hope this framework will allow us to begin measuring progress towards responsible deployment of AI, while also prompting further discussion with industry leaders on best practices.

If you’d like to help us further develop our thinking, or would like support implementing AI within your own organisation, please get in touch.


About the author

Ruth McGuinness
Data & AI Practice Lead · Kainos