How foreign policy will have to change in the AI era

Whether talk is of “killer robots”, “death by algorithm”, or an existential “threat to humanity”, much of the debate on how artificial intelligence (AI) will affect international affairs is conducted in broad strokes and grandiose terms. To be sure, since AI promises to be a world-changing technology, multilateral efforts to rein in its uncontrolled spread are commendable. Yet such worst-case scenarios must not blind politicians or analysts to the more immediate changes this technology will bring to business, politics, and everyday life.

This, of course, includes the way foreign policy is conducted. Here, as this analysis shows, policymakers need to act both swiftly and profoundly. Following a brief outline of the changes AI will bring (1), the article details how the logic of this technology will inherently change a people’s trade like diplomacy (2). Finally, some ideas about how AI could be used for better (foreign) policies can provide inspiration for how to steer through the coming convulsions (3).

 

1. AI will bring fundamental change not seen in 500 years

The spread of AI will revolutionize not only business and society, but also human affairs more generally – including what it means to be an intelligent being. The technology will soon be ubiquitous, though not always visible, helping humans to explore, and make sense of, reality. The disruptive change it will bring is more comparable to the invention of the printing press, which heralded the age of Enlightenment, than to the Industrial Revolution, which took off much later.

Still, if one wanted to use an image from that latter period to illustrate the case, one could compare the development of AI to Carl Benz miraculously inventing a powerful, modern 300hp V8 engine and bolting it onto his carriage-like vehicle. It would take years, if not decades, to build the actual car around that motor so that it could be used safely. Meanwhile, however, the engine itself would have helped its inventor to develop a successor model many times more potent – just as every self-learning AI does.

Policymakers across the globe, therefore, must come to terms with – and get a grip on – a mighty technology transcending the boundaries of state affairs as we know them. As much as they must tackle the actual threats posed by AI – from enhanced cyber-attacks to the development of potent biological weapons to, as some would have it, human extinction – it is the fundamental changes in human comprehension and activity, in particular in human-machine interaction, that will shape the future of foreign policymaking. Or, as the late Henry Kissinger and his co-authors put it in their 2021 book The Age of AI, this is a technology that transforms human consciousness and is “in need of a guiding philosophy”.

 


Already, the use of AI has been growing not just in companies and on smartphones, but also in the world’s governments. However, while most firms can reinvent themselves around profit-making business models, and most customers will happily adopt new – and smarter – apps, the administration of public affairs is, by nature, less agile. For many domestic agencies, the challenges lie in existing legal provisions (including privacy regulations), in a lack of tech infrastructure (or of the funds to pay for it), and in a clerical professional culture. The art of diplomacy, in turn, faces yet another hurdle: it is an often-opaque business that depends, at times, on a state’s material or immaterial attributes (its economic or military assets brought to bear, its long-standing alliances, or its image and reputation); at other times on personal encounters; and, at other times still, on the right strategy.

None of this is particularly conducive to running on AI. That said, it will be the technology behind AI, and the fact that it is mostly in the hands of private actors, that forces a fundamental rethink of the art of diplomacy. This is because deconstructing strategies and policies into definable and executable parts – and doing so in collaboration with tech experts from the corporate sector – will alter current practices, whether diplomats like it or not.

 

2. Applying a tech logic to a people’s business

It is not just the geopolitics…

Protecting and promoting key technologies and vital industries is now commonplace, including in Western – meaning the previously pro-free-trade – economies. The main driver is competition with China, not least in AI, and with it the desire to ensure the survival, if not dominance, of open societies and liberalism. While Russia is not an AI threat per se, its deepening cooperation with Beijing in military, political, and economic affairs, and its influence – not least through disinformation – in major parts of the world, pose a challenge. In fact, the current volatile phase of emerging AI powers is particularly critical, as powerful but not yet dominant countries that are falling behind may be tempted to strike while they still can, fearing that it will soon be too late.

Consequently, civilian AI innovation is increasingly entangled with national security priorities. This extends to foreign ministries, where civilians (not military types) devise strategies to fend off, diplomatically, emerging security threats. The nuclear arms race was bad enough, and its guardrails are weakening again; yet it is the entirely different logic – and cost structure – of AI used as a weapon that makes it so dangerous. More likely than not, powerful AI applications that can serve both civilian and military purposes (dual use) will be easily available – and concealable – to many more countries around the world than the complex, clunky, and hugely expensive nuclear weapons ever were. Diplomats are therefore now working on non-proliferation not just of nukes (as they have done for decades), but also of the tiniest semiconductors that are crucial for ever more potent AI applications.

While these geopolitical shifts are something to watch and guard against, policymakers and strategists also need to consider how this technology alters the way of working in their field. That is because using AI requires certain foreign policy assumptions, which have so far remained largely implicit, to be defined – or at least made explicit.

 

…but also the tech logic…

True, strategies are regularly formulated in the world’s ministries and chanceries, and they give some guidance on, and clarity about, such basic assumptions. However, their language is often woolly, precisely to avoid committing to a specific course of action. Moreover, they are not followed mechanically – certainly not in times of crisis, which invite improvisation.

“Winging it”, however, will not be possible with AI at your side. For AI-powered applications to be useful tools for foreign policymakers, both the input (problem definition, goals, instruments, etc.) and the process (machine learning and calculations) need to be spelled out beforehand. Furthermore, what to do with the results demands prior agreement: even with ultimate human control over policy, a “delegation bias” will likely lead to the AI-generated recommendation being judged as superior, if only to avoid being wrong on one’s own advice.

As I have experienced myself when co-creating an online toolkit explaining “Democracy by Design” to employees of tech start-ups, translating political concepts into an application requires hard thinking about what to express, and in which technical terms. Rather than one human being explaining something to another (which is what diplomats often do), information, policies, and strategies will have to be mediated by computerized processes. These, in turn, are programmed by people who are not at all in the foreign policy know but who think in terms of actionable inputs and outputs.

 


Thus, to make use of the wonderful world of AI tools in front of them, foreign policymakers will soon have to clearly define certain inputs (strategies, statistics, real-time data, meeting minutes, and the like) and how these should be processed to achieve any useful output. In other words, the “secret sauce” of diplomacy will have to be put into recipe form.
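What such a recipe could look like is easy to sketch. The following minimal Python example is purely illustrative – the class, field names, and checks are assumptions made for this article, not a description of any existing government tool – but it shows the kind of explicitness that AI-assisted work demands: nothing can run until every ingredient has been spelled out.

```python
from dataclasses import dataclass


@dataclass
class PolicyRecipe:
    """Hypothetical, explicit 'recipe' for one foreign policy question."""
    problem: str              # problem definition
    goals: list[str]          # what success would look like
    instruments: list[str]    # levers available (sanctions, aid, talks...)
    data_sources: list[str]   # inputs an AI tool may draw on
    human_signoff: bool = True  # results remain recommendations, not decisions


def ready_for_analysis(recipe: PolicyRecipe) -> list[str]:
    """Return the parts of the recipe that are still undefined.

    The point of the exercise: an AI tool can only be run once every
    element has been made explicit - there is no 'winging it'.
    """
    missing = []
    if not recipe.problem.strip():
        missing.append("problem definition")
    if not recipe.goals:
        missing.append("goals")
    if not recipe.instruments:
        missing.append("instruments")
    if not recipe.data_sources:
        missing.append("data sources")
    return missing


# Example: a deliberately incomplete recipe
draft = PolicyRecipe(
    problem="Stabilise bilateral relations after a trade dispute",
    goals=["de-escalation within six months"],
    instruments=[],  # not yet decided
    data_sources=["trade statistics", "embassy cables"],
)
print(ready_for_analysis(draft))  # -> ['instruments']
```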

Beyond the analytical dissection of long-running policy paradigms, two additional preparations are required in this likely and not-too-distant future: to train and employ the right people, and to re-assess decision-making powers. For one, every foreign (as well as defense, development, trade, etc.) ministry needs personnel with different levels of AI knowledge: those who can create useful AI applications; those who can ably apply them; and those who can reflect on the institution’s use of them. This requires new training and hiring procedures, with both a short- and a long-term view. For another, government agencies may have to come to terms with not knowing how the “black box of AI” produced its results. The question of how to accept those results when there is no transparency about which options the machine has considered will be hard – but crucial – to answer, at both an institutional and an individual level.

 

…as well as the corporate contribution.

Moreover, the reliance on corporations will have profound implications. Today, much of official foreign policymaking is done with in-house means: personnel working from public buildings with proprietary equipment such as encrypted communication tools (the frequent use of WhatsApp being the exception to the rule). Of course, governments have for decades bought computers and software for worldwide use in ministries and embassies, but such helpful tools have merely facilitated the way diplomacy is conducted (much as cars replaced carriages, if you will). They did not fundamentally alter it the way AI is certain to do.

In fact, for a sovereign to rely on custom-made AI models developed by companies requires a much greater degree of trust in, and control over, the technology. In their own interest, governments will have to make sure that the products they purchase not only achieve the desired results but do so on sound legal and ethical grounds (think of copyright violations, in-built biases, etc.). That is obviously very relevant in domestic policy areas such as welfare benefits or legal proceedings, where governments are likely to face lawsuits over the use of AI, but it is equally urgent in foreign policy.

 

3. Imagine AI… for people, processes, and policies

If all this sounds too depressing – as much of what foreign policy wonks, unlike the many tech enthusiasts out there, write on AI does – it may be worthwhile to imagine how this technology could be put to good use. Let us look at people, processes, and policies.

Diplomacy (still) being a people’s business, I imagine a Facebook for foreign affairs, filled by diplomats with all their contacts and kept up to date by an algorithm drawing on information in the public domain. This tool, ideally shared between like-minded countries, would allow officials to identify the right counterpart – in whichever country or institution, on whatever subject matter – and to establish direct communication.
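Stripped to its essentials – and with all names and fields being hypothetical assumptions rather than any existing system – such a shared directory could be little more than a searchable contact list that an algorithm keeps current from public sources:

```python
from dataclasses import dataclass


@dataclass
class Contact:
    """One entry in a hypothetical shared diplomatic directory."""
    name: str
    institution: str
    country: str
    topics: frozenset[str]  # e.g. {"sanctions", "climate"}
    source: str             # "entered by diplomat" or "public domain"


def find_counterparts(directory: list[Contact], country: str, topic: str) -> list[Contact]:
    """Return contacts in a given country who work on a given subject."""
    return [c for c in directory if c.country == country and topic in c.topics]


# Usage: a diplomat looking for the right interlocutor on export controls
directory = [
    Contact("A. Example", "Ministry of Trade", "Japan",
            frozenset({"export controls", "semiconductors"}), "public domain"),
    Contact("B. Example", "Foreign Ministry", "Japan",
            frozenset({"climate"}), "entered by diplomat"),
]
print(find_counterparts(directory, "Japan", "export controls"))
```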

International negotiations, in turn, could be boosted by having AI tools draft texts and compromise proposals. In addition, AI-powered simultaneous interpretation would enable talks to be conducted in any language, so that badly spoken ‘international English’ is no longer needed. That should not lead diplomats to learn fewer languages, though: the cultural knowledge and the understanding of interpersonal relations that come with immersion in different societies remain just as significant in AI-driven diplomacy.

Moreover, to enlarge the personnel pool, AI should enhance recruitment processes, just as ministries themselves should devise scholarship programs for non-national AI experts to boost their own country’s tech expertise. And at the very practical level, AI could support much-maligned visa processes at embassies, streamlining both the application and the appointment process and, ultimately, increasing people-to-people exchanges.

On processes, I can see data-driven horizon-scanning being widely employed. Informed by actual events (political, social, kinetic, or meteorological) and guided by predetermined parameters, such AI-enhanced technology would search for observable patterns in real life. With officials following up to verify and substantiate the tool’s findings, this would help match interests and preferences with on-the-ground information to produce concrete instructions. In the long run, it may even become possible to predict and, thus, mitigate natural disasters or social upheaval, including by warning those countries that do not possess such tools.
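A minimal sketch of how such horizon-scanning could be wired together might look as follows; the event feed, categories, and thresholds are illustrative assumptions, and the “predetermined parameters” are deliberately set by humans rather than learned by the machine:

```python
from dataclasses import dataclass
from collections import Counter


@dataclass
class Event:
    """One observed event from a hypothetical real-time feed."""
    country: str
    category: str  # "political", "social", "kinetic", "meteorological"
    severity: int  # 1 (minor) .. 5 (severe)


# Predetermined parameters, set by policymakers rather than by the machine
WATCHED_CATEGORIES = {"political", "kinetic"}
ALERT_THRESHOLD = 3  # at least this many severe events per country


def scan(events: list[Event]) -> list[str]:
    """Flag countries with an unusual cluster of severe, watched events.

    Officials would then verify and substantiate each flag - the tool
    only points at patterns, it does not decide anything.
    """
    severe = Counter(
        e.country for e in events
        if e.category in WATCHED_CATEGORIES and e.severity >= 4
    )
    return [country for country, n in severe.items() if n >= ALERT_THRESHOLD]


events = [
    Event("Country X", "kinetic", 4),
    Event("Country X", "political", 5),
    Event("Country X", "kinetic", 4),
    Event("Country Y", "meteorological", 5),
]
print(scan(events))  # -> ['Country X']
```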

On policymaking, I would dream of a custom-built application based on large language models that allows the (dis)aggregation of policies into (or from) their integral parts (defined goals, operational measures, available resources and personnel, etc.). This way, submissions or diplomatic cables or, indeed, strategies can be defined around certain, clearly recognizable factors. On actual policies, I would wish for the countries leading on AI to reach out to the others to draft the next set of rules for an AI-enhanced international era. As part of establishing a new global commons, this should include the setting up of an AI-enabled database for the “history of humankind”: Forgery-proof thanks to blockchain technology, this archive would help preserve not just official documents (for example, at UN level), but also major historical records which might soon be drowned out by AI-created deepfakes.
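The (dis)aggregation idea, at least, is easy to prototype on paper. The sketch below assumes a simple shared schema and a placeholder model call (`call_llm` is an injected stand-in, not a real API); it merely illustrates how a cable or strategy might be broken into – and reassembled from – clearly recognizable parts:

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class PolicyParts:
    """Hypothetical building blocks a policy text is broken down into."""
    goals: list[str]
    measures: list[str]
    resources: list[str]
    responsible_units: list[str]


PROMPT_TEMPLATE = (
    "Break the following diplomatic text into JSON with the keys "
    "'goals', 'measures', 'resources', 'responsible_units':\n\n{text}"
)


def disaggregate(text: str, call_llm) -> PolicyParts:
    """Ask a language model (injected via the placeholder `call_llm`) to decompose a policy text."""
    raw = call_llm(PROMPT_TEMPLATE.format(text=text))
    data = json.loads(raw)
    return PolicyParts(**data)


def aggregate(parts: PolicyParts) -> str:
    """Reassemble the parts into a structured, cable-style summary."""
    return json.dumps(asdict(parts), indent=2)


# Usage with a stand-in model, for demonstration purposes only
def fake_llm(prompt: str) -> str:
    return json.dumps({
        "goals": ["restore consular services"],
        "measures": ["reopen visa section"],
        "resources": ["two additional staff"],
        "responsible_units": ["Embassy in Country X"],
    })


print(aggregate(disaggregate("draft cable text", fake_llm)))
```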

There certainly are more and better ideas out there, as foreign ministries, development agencies, intelligence bodies, and other government actors try to make use of AI. Still, after dreaming big and bold, things need to be boiled down to deliverables. Only then can tech contribute to improving foreign policymaking – and perhaps even help avert the risk of extinction.

 

 
