The AI Safety Summit: A New Era for Global Tech Governance

For anyone working in defence and security, Bletchley Park is synonymous with strategic advantage and operational breakthroughs. Considered by many to be the birthplace of British computing, and widely regarded as a place of pilgrimage for cryptographers and cryptanalysts the world over, it will forever be associated with Alan Turing: designer of the Bombe machine that helped break the Enigma cipher, and originator of a celebrated method for judging a computer’s intelligence by the apparent equivalence of its behaviour to a human’s – what has since come to be known as the Turing Test. It has, in turn, become a symbol of UK soft power and of the UK’s science and technology capabilities.

Seven decades later, Bletchley was the obvious choice of venue in which to convene government representatives, technology industry leaders, and academics to discuss the risks of frontier AI, defined by the UK as “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models.” Like Bletchley itself, the success of the AI Safety Summit was by no means certain. In the run-up, commentators noted that very few world leaders were scheduled to attend, most choosing instead to send their deputies or technology ministers. The publication, just two days before the summit, of the G7 nations’ agreement on Guiding Principles and a Code of Conduct on AI and of President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI was interpreted by some as dismissive of the UK’s own intention to achieve agreement on a set of principles.

These assessments fail to appreciate that the singular success of the summit was its determination to include all the relevant stakeholders. Despite reported pushback from other nations and disquiet among parliamentarians, the UK government insisted on extending an invitation to China. As a result, it achieved something unprecedented: an agreed set of principles signed by twenty-eight countries plus the EU, spanning six continents and including both the United States and China.

While not legally binding, the Bletchley Declaration is distinctive in its assertion that “All actors have a role to play in ensuring the safety of AI: nations, international fora and other initiatives, companies, civil society and academia will need to work together.” It may be tempting to see Elon Musk’s attendance at the summit and his accompanying fireside chat with Prime Minister Rishi Sunak merely as publicity stunts. But this neglects the crucial fact that technology companies, not governments, have the practical firepower: it is largely thanks to them that AI has developed as successfully and rapidly as it has to date. Sister processes for AI regulation in the EU, UN, and US have consulted tech companies but not given them a seat at the table. The UK approach gave due recognition to the fact that the technology industry is not merely the target of AI policy and regulation but a shaper of it. It openly acknowledged the new world order of the Information Age.

Achieving consensus is easier when the terms are broad, and it’s fair to say that the declaration is light on how exactly frontier AI is to be made safe. Rather, it sets out an agenda for the future, focused on:

“identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.”

and

“building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.”

The aspiration is that the detail of how this might be achieved in practice will be fleshed out in time for the next AI Safety Summit, which the Republic of Korea has offered to host virtually. For reasons of expediency, that detail is unlikely to diverge significantly from the provisions of Biden’s Executive Order, the EU’s AI Act, and the G7 principles.

The extensive arrangements elaborated in the Executive Order include industry standards for developers, testbeds and red-teaming to identify vulnerabilities, agreed audit processes, reporting measures, best practices for financial institutions and critical infrastructure providers to manage AI-specific cybersecurity risks, and the incorporation of NIST’s AI Risk Management Framework (NIST AI 100-1). The EU AI Act likewise takes a risk-based approach. It identifies as high-risk those AI systems used in products covered by the EU’s product safety legislation, including medical devices, toys, cars, and aviation. AI systems in the areas of biometrics, critical infrastructure, education and training, employment, essential services and benefits, law enforcement, border control, and RegTech must additionally be registered in an EU database. All such systems will need to be assessed before going to market and throughout their lifecycle.

Both the US order and the EU Act introduce transparency requirements for Generative AI such as ChatGPT, including measures to authenticate the provenance of content, to disclose when content has been generated by AI, to publish summaries of the data used for training, and to prevent the generation of illegal content. In doing so, they seek to address the present threats posed by existing uses of AI – threats conspicuous by their absence from the Bletchley Declaration, and whose perceived omission from the UK government’s priorities for the AI Safety Summit led to public expressions of concern by the TUC and others. To ensure alignment with international policy initiatives and regulation, the government may be compelled to review its stance, expanding its focus beyond safe development to encompass safe use.

Such a review would be a welcome shift for the thousands of British businesses and public sector organisations that already use Generative AI and Machine Learning, many of which will be subject to EU and US legislation where they have links to those jurisdictions. While the future threats of frontier AI are a legitimate concern, organisations of all sizes need guidance now on secure and appropriate use of publicly available Generative AI, transparent deployment of Machine Learning, and how to ensure the security and integrity of training datasets. As that guidance on safe usage is developed, it is important that businesses, as the actors with practical experience, have a role in shaping it. The UK government has staked its claim to collaborative tech governance. It will now need to play its part.

Dr Victoria Baines

Dr Victoria Baines is a BFPG Senior Research Fellow