Secretary-General Urges Security Council to Ensure Transparency, Accountability, Oversight, in First Debate on Artificial Intelligence
Following are UN Secretary-General António Guterres’ remarks to the Security Council debate on artificial intelligence, in New York today:
I thank the United Kingdom for convening the first debate on artificial intelligence (AI) ever held in this Council.
I have been following the development of AI for some time. Indeed, I told the General Assembly six years ago that AI would have a dramatic impact on sustainable development, the world of work and the social fabric. But, like everyone here, I have been shocked and impressed by the newest form of AI, generative AI, which is a radical advance in its capabilities.
The speed and reach of this new technology in all its forms are utterly unprecedented. It has been compared to the introduction of the printing press. But, while it took more than 50 years for printed books to become widely available across Europe, ChatGPT reached 100 million users in just two months.
The finance industry estimates AI could contribute between $10 trillion and $15 trillion to the global economy by 2030. Almost every Government, large company and organization in the world is working on an AI strategy.
But, even its own designers have no idea where their stunning technological breakthrough may lead. It is clear that AI will have an impact on every area of our lives, including the three pillars of the United Nations. It has the potential to turbocharge global development, from monitoring the climate crisis to breakthroughs in medical research. It offers new potential to realize human rights, particularly to health and education. But, the High Commissioner for Human Rights has expressed alarm over evidence that AI can amplify bias, reinforce discrimination and enable new levels of authoritarian surveillance.
Today’s debate is an opportunity to consider the impact of artificial intelligence on peace and security — where it is already raising political, legal, ethical and humanitarian concerns. I urge the Council to approach this technology with a sense of urgency, a global lens, and a learner’s mindset. Because what we have seen is just the beginning. Never again will technological innovation move as slowly as it is moving today.
AI is being put to work in connection with peace and security, including by the United Nations. It is increasingly being used to identify patterns of violence, monitor ceasefires and more, helping to strengthen our peacekeeping, mediation and humanitarian efforts.
But, AI tools can also be used by those with malicious intent. AI models can help people to harm themselves and each other, at massive scale. Let’s be clear: the malicious use of AI systems for terrorist, criminal or State purposes could cause horrific levels of death and destruction, widespread trauma and deep psychological damage on an unimaginable scale.
AI-enabled cyberattacks are already targeting critical infrastructure and our own peacekeeping and humanitarian operations, causing great human suffering. The technical and financial barriers to access are low, including for criminals and terrorists. Both military and non-military applications of AI could have very serious consequences for global peace and security.
The advent of generative AI could be a defining moment for disinformation and hate speech — undermining truth, facts and safety, adding a new dimension to the manipulation of human behaviour and contributing to polarization and instability on a vast scale.
Deepfakes are just one new AI-enabled tool that, if unchecked, could have serious implications for peace and stability. And the unforeseen consequences of some AI-enabled systems could create security risks by accident. Look no further than social media. Tools and platforms that were designed to enhance human connection are now used to undermine elections, spread conspiracy theories and incite hatred and violence.
Malfunctioning AI systems are another huge area of concern. And the interaction between AI and nuclear weapons, biotechnology, neurotechnology and robotics is deeply alarming. Generative AI has enormous potential for good and evil at scale. Its creators themselves have warned that much bigger, potentially catastrophic and existential risks lie ahead. Without action to address these risks, we are derelict in our responsibilities to present and future generations.
The international community has a long history of responding to new technologies with the potential to disrupt our societies and economies. We have come together at the United Nations to set new international rules, sign new treaties and establish new global agencies. While many countries have called for different measures and initiatives around the governance of AI, this requires a universal approach.
And questions of governance will be complex for several reasons. First, powerful AI models are already widely available to the general public. Second, unlike nuclear material and chemical and biological agents, AI tools can be moved around the world leaving very little trace. And third, the private sector’s leading role in AI has few parallels in other strategic technologies.
But, we already have entry points. One is the 2019 guiding principles on lethal autonomous weapons systems, agreed through the Convention on Certain Conventional Weapons. I agree with the large number of experts who have recommended the prohibition of lethal autonomous weapons without human control.
A second is the 2021 Recommendation on the Ethics of Artificial Intelligence, agreed through the United Nations Educational, Scientific and Cultural Organization (UNESCO).
The Office of Counter-Terrorism, working with the United Nations Interregional Crime and Justice Research Institute (UNICRI), has provided recommendations on how Member States can tackle the potential use of AI for terrorist purposes.
And the AI for Good summits hosted by the International Telecommunication Union (ITU) have brought together experts, the private sector, United Nations agencies and Governments around efforts to ensure that AI serves the common good.
The best approach would address existing challenges while also creating the capacity to monitor and respond to future risks. It should be flexible and adaptable, and consider technical, social and legal questions. It should integrate the private sector, civil society, independent scientists and all those driving AI innovation.
The need for global standards and approaches makes the United Nations the ideal place for this to happen. The Charter of the United Nations’ emphasis on protecting succeeding generations gives us a clear mandate to bring all stakeholders together around the collective mitigation of long-term global risks. AI poses just such a risk.
I therefore welcome calls from some Member States for the creation of a new United Nations entity to support collective efforts to govern this extraordinary technology, inspired by such models as the International Atomic Energy Agency (IAEA), the International Civil Aviation Organization (ICAO) or the Intergovernmental Panel on Climate Change (IPCC).
The overarching goal of this body would be to support countries to maximize the benefits of AI for good, to mitigate existing and potential risks, and to establish and administer internationally agreed mechanisms of monitoring and governance.
Let’s be honest: There is a huge skills gap around AI in Governments and other administrative and security structures that must be addressed at the national and global levels. A new United Nations entity would gather expertise and put it at the disposal of the international community. And it could support collaboration on the research and development of AI tools to accelerate sustainable development.
As a first step, I am convening a multistakeholder High-Level Advisory Board for Artificial Intelligence that will report back on the options for global AI governance, by the end of this year. My upcoming Policy Brief on A New Agenda for Peace will also make recommendations on AI governance to Member States.
First, it will recommend that Member States develop national strategies on the responsible design, development and use of AI, consistent with their obligations under international humanitarian law and human rights law.
Second, it will call on Member States to engage in a multilateral process to develop norms, rules and principles around military applications of AI, while ensuring the engagement of other relevant stakeholders.
Third, it will call on Member States to agree on a global framework to regulate and strengthen oversight mechanisms for the use of data-driven technology, including artificial intelligence, for counter-terrorism purposes.
The Policy Brief on a New Agenda for Peace will also call for negotiations to be concluded by 2026 on a legally binding instrument to prohibit lethal autonomous weapons systems that function without human control or oversight, and which cannot be used in compliance with international humanitarian law. I hope Member States will debate these options and decide on the best course of action to establish the AI governance mechanisms that are so urgently needed.
In addition to the recommendations of the New Agenda for Peace, I urge agreement on the general principle that human agency and control are essential for nuclear weapons and should never be withdrawn. The Summit of the Future next year will be an ideal opportunity for decisions on many of these interrelated issues.
I urge this Council to exercise leadership on artificial intelligence and show the way towards common measures for the transparency, accountability, and oversight of AI systems. We must work together for AI that bridges social, digital and economic divides, not one that pushes us further apart.
I urge you to join forces and build trust for peace and security. We need a race to develop AI for good: to develop AI that is reliable and safe and that can end poverty, banish hunger, cure cancer and supercharge climate action [and] an AI that propels us towards the Sustainable Development Goals. That is the race we need, and that is a race that is possible and achievable. Thank you.