It is no secret that the UK’s approach to regulating AI differs markedly from the strategies of other countries, including many of its allies; even reports by the European Parliament acknowledge this divergence. As set out in the 2021 National AI Strategy, the UK famously takes a “pro-innovation” outlook, a sentiment reflected in most subsequent attempts at AI regulation in the UK. Under the 2023 White Paper, sectoral regulators implement a “principles-based framework” in their respective sectors, and the UK has largely steered away from overarching horizontal frameworks. In stark contrast, the EU AI Act of 2024 laid the groundwork for classifying AI into different risk levels in the European Union, with different obligations attached to each level. The USA under the second Trump administration is also pursuing a very different approach, reverting to a voluntary commitment-based policy.
Given these distinct policy choices on AI, it is easy to be cynical about the prospect of global cooperation on the subject. After all, why waste time and resources deliberating on common standards and principles if national approaches so evidently differ? I would argue, however, that global collaboration on AI is, in fact, a useful tool for the UK, both in terms of foreign policy and domestic development.
Of course, the most obvious reason is that collectively shaping governance can help mitigate global risks and security threats that AI might pose in the future. Additionally, sharing information on best practices with experts in other countries can help both developers and policymakers within the UK make important decisions. Initiatives that bring researchers from abroad to work with experts in the UK are a common vehicle for this.
I would argue, however, that there is another, deeper advantage, particularly in terms of bilateral cooperation: Collaboration on AI and emerging technologies allows the UK to maintain consistently positive ties with allies and others alike, while avoiding more politically contentious topics. For example, as a report by the Tony Blair Institute notes, AI governance is a good opportunity to work fruitfully alongside the EU, without reviving “divisive debates about a closer political union”.
Mapping the UK’s Bilateral Cooperation on AI
Closer observation yields a clear pattern: The UK has built AI-related collaborative efforts in myriad ways with its strategic partners around the world. When it comes to bilateral ties, the UK has emphasised three broad strategies of AI cooperation: collaboration through scientific research, through trade, and on safety.
In Europe, the UK has pushed for collaborative programmes with some major EU countries, emphasising scientific research into AI in particular. Take, for instance, the £800 commitment to jointly fund a slew of AI research projects alongside France, agreed in February 2024. Just a month later, the UK reached an agreement with Germany that included a proposal to set up a joint group of researchers, potentially facilitating AI-related R&D cooperation. With Switzerland, too, the UK has maintained consistent collaboration in scientific research (including AI-related work), with a recent MoU strengthening this. The Alan Turing Institute has also spearheaded an AI Researchers Exchange Programme with Italy since 2023.
The UK has also made efforts to keep the conversation on AI going with several countries in the Indo-Pacific region, in some instances through provisions within trade agreements. A Free Trade Agreement (FTA) with Australia in 2023 outlined plans to set up a Strategic Innovation Dialogue to discuss AI regulatory approaches. Similarly, an FTA with New Zealand included provisions on governance cooperation related to emerging technologies like AI. With Japan, a Digital Partnership highlighted a shared commitment to ensuring that “democratic principles” shape AI governance, as well as the potential for further collaboration in multilateral platforms. AI collaboration with India has taken on a slightly different, security-tinged flavour, with the UK-India Technology Security Initiative of July 2024 (which included a section on AI) being overseen by the National Security Advisers of the two countries. Perhaps the most prominent AI collaborator for the UK in Asia, though, has been South Korea. The UK-Republic of Korea Science and Technology Accord of 2023 sought collaboration on technologies like AI. On top of this, the two countries co-hosted the Seoul AI Safety Summit last year, while Innovate UK has invested significantly in Korean innovation.
However, the most striking aspect of the UK’s external cooperation strategy over the past year revolves around the theme of AI safety. The UK set up its AI Safety Institute (AISI) in late 2023, a pioneering effort that a number of other countries would soon emulate with equivalent institutes of their own. This has allowed the UK to build institutional ties that prioritise AI safety within international discourse. The UK-Singapore Memorandum of Cooperation on AI Safety in December 2024 saw the AISIs of both countries agree to work towards shared standards and policies. Prior to that, the UK-US Memorandum of Understanding on AI Safety, announced in April 2024, was also a watershed moment: the respective AISIs of the two countries agreed to undertake a “joint testing exercise” as well as information sharing and policy alignment.
Important Multilateral Efforts
The UK has also been involved in several important developments on the international institutional front in AI governance. Back in 2020, it was one of the founding members of the Global Partnership on Artificial Intelligence (GPAI), a group advocating for “human-centric” AI, in line with OECD recommendations. In 2023, the UK collaborated with its AUKUS partners (the USA and Australia) to host the first in a series of trials under the Resilient and Autonomous Artificial Intelligence Technologies (RAAIT) programme. As the name implies, these trials involve work on autonomous vehicles and weapons systems.
The UK took its biggest step in global AI governance in November 2023, when it hosted the AI Safety Summit in Buckinghamshire. This resulted in the Bletchley Declaration, reaffirming the commitment to human-centric AI development, as well as the establishment of the AI Safety Institute mentioned previously.
The UK signed the G7 Ministerial Declaration in Italy in March 2024, which proposed a toolkit for deploying AI in public services. A couple of months later, at the Seoul AI Safety Summit, the UK got 16 leading AI companies to sign a set of Frontier AI Safety Commitments, essentially committing them to mitigating risks. In September 2024, the UK notably signed a Council of Europe Convention encouraging the monitoring of AI deployment to protect civilians from its risks.
Involvement in these multilateral fora allows the UK to be an important voice in standard-setting for a technology likely to become increasingly pervasive over the coming decade. This could naturally be a useful component of the UK’s soft power. However, the UK must continue to ramp up its efforts to engage with other countries on such matters. In particular, further cooperation with the EU would be an astute decision; some have suggested joint efforts at improving computing infrastructure as a means of doing this.
Another potential opportunity is the network of AI Safety Institutes, initiated in San Francisco in November 2024. If the USA under Trump continues to engage with this growing network, the UK could capitalise on it by fostering closer communication among members, while also encouraging more countries to set up institutes of their own.
Finally, the upcoming visit to the UK by PRC Foreign Minister Wang Yi this month could be a good time to test the waters on AI-related collaboration with China, which could in turn contribute to the ongoing process of rejuvenating UK-China relations.
Thus far, the UK has played an active role in cooperating bilaterally with allies on various aspects of AI, and has been an active participant in multilateral platforms. Only time will tell whether the UK can continue to engage effectively with AI governance at the global level. However, remaining cognisant of why this engagement matters, and being proactive in keeping a seat at the table, should be a priority.
[Image by Haider Mahmood from Pixabay]
Dhruv Banerjee is a researcher focusing on AI policy and emerging technologies. He has a background in security and foreign policy, having worked with various think tanks. Currently, he is at the London School of Economics. The views and opinions expressed in this article are those of the author.