In progress at UN Headquarters

SG/SM/21880

Artificial Intelligence: Mr. Guterres Urges the Security Council to Exercise Its Leadership and Promote Transparency, Accountability and Oversight of AI Systems

The following is the text of the statement by United Nations Secretary-General António Guterres at the Security Council debate on artificial intelligence, held in New York today:

I thank the United Kingdom for convening the first debate on Artificial Intelligence ever held in this Council. 

I have been following the development of AI for some time.  Indeed, I told the General Assembly six years ago that AI would have a dramatic impact on sustainable development, the world of work, and the social fabric.  But like everyone here, I have been shocked and impressed by the newest form of AI, generative AI, which is a radical advance in its capabilities. 

The speed and reach of this new technology in all its forms are utterly unprecedented.  It has been compared to the introduction of the printing press.  But while it took more than fifty years for printed books to become widely available across Europe, ChatGPT reached 100 million users in just two months. 

The finance industry estimates AI could contribute between $10 trillion and $15 trillion to the global economy by 2030.  Almost every government, large company and organization in the world is working on an AI strategy. 

But even its own designers have no idea where their stunning technological breakthrough may lead.  It is clear that AI will have an impact on every area of our lives – including the three pillars of the United Nations.  It has the potential to turbocharge global development, from monitoring the climate crisis to breakthroughs in medical research.  It offers new potential to realize human rights, particularly to health and education.  But the High Commissioner for Human Rights has expressed alarm over evidence that AI can amplify bias, reinforce discrimination, and enable new levels of authoritarian surveillance. 

Today’s debate is an opportunity to consider the impact of Artificial Intelligence on peace and security – where it is already raising political, legal, ethical, and humanitarian concerns.  I urge the Council to approach this technology with a sense of urgency, a global lens, and a learner’s mindset.  Because what we have seen is just the beginning.  Never again will technological innovation move as slowly as it is moving today. 

AI is being put to work in connection with peace and security, including by the United Nations.  It is increasingly being used to identify patterns of violence, monitor ceasefires and more, helping to strengthen our peacekeeping, mediation and humanitarian efforts. 

But AI tools can also be used by those with malicious intent.  AI models can help people to harm themselves and each other, at massive scale.  Let’s be clear:  The malicious use of AI systems for terrorist, criminal or state purposes could cause horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale. 

AI-enabled cyberattacks are already targeting critical infrastructure and our own peacekeeping and humanitarian operations, causing great human suffering.  The technical and financial barriers to access are low – including for criminals and terrorists.  Both military and non-military applications of AI could have very serious consequences for global peace and security. 

The advent of generative AI could be a defining moment for disinformation and hate speech – undermining truth, facts, and safety; adding a new dimension to the manipulation of human behaviour; and contributing to polarization and instability on a vast scale. 

Deepfakes are just one new AI-enabled tool that, if unchecked, could have serious implications for peace and stability.  And the unforeseen consequences of some AI-enabled systems could create security risks by accident.  Look no further than social media.  Tools and platforms that were designed to enhance human connection are now used to undermine elections, spread conspiracy theories, and incite hatred and violence. 

Malfunctioning AI systems are another huge area of concern.  And the interaction between AI and nuclear weapons, biotechnology, neurotechnology, and robotics is deeply alarming.  Generative AI has enormous potential for good and evil at scale.  Its creators themselves have warned that much bigger, potentially catastrophic and existential risks lie ahead.  Without action to address these risks, we are derelict in our responsibilities to present and future generations. 

The international community has a long history of responding to new technologies with the potential to disrupt our societies and economies.  We have come together at the United Nations to establish new international rules, sign new treaties and create new global agencies.  While many countries have called for different measures and initiatives around the governance of AI, a universal approach is needed. 

And questions of governance will be complex in several respects: First, some powerful AI models are already widely available to the general public.  Second, unlike nuclear material and chemical and biological agents, AI tools can be shipped anywhere in the world while leaving very little trace.  And third, the private sector’s leading role in AI has few parallels in other strategic technologies. 

But we already have starting points.  For example, the 2018-2019 guiding principles on lethal autonomous weapons systems, adopted under the Convention on Certain Conventional Weapons.  I agree with the very many experts who have recommended a ban on lethal autonomous weapons used without human control. 

We also have the Recommendations on the Ethics of Artificial Intelligence adopted by UNESCO in 2021. 

For its part, the Office of Counter-Terrorism, together with the United Nations Interregional Crime and Justice Research Institute, has developed recommendations on how Member States can counter the potential use of AI for terrorist purposes. 

And the International Telecommunication Union’s “AI for Good” summits have brought together experts, the private sector, United Nations agencies and governments around efforts to ensure that AI serves the common good. 

The best approach would address existing challenges while also creating the capacity to monitor and respond to future risks.  It should be flexible and adaptable, and consider technical, social and legal questions.  It should integrate the private sector, civil society, independent scientists and all those driving AI innovation. 

The need for global standards and approaches makes the United Nations the ideal place for this to happen.  The Charter’s emphasis on protecting succeeding generations gives us a clear mandate to bring all stakeholders together around the collective mitigation of long-term global risks.  AI poses just such a risk. 

I therefore welcome calls from some Member States for the creation of a new United Nations entity to support collective efforts to govern this extraordinary technology, inspired by such models as the International Atomic Energy Agency, the International Civil Aviation Organization, or the Intergovernmental Panel on Climate Change. 

The overarching goal of this body would be to support countries to maximize the benefits of AI for good, to mitigate existing and potential risks, and to establish and administer internationally agreed mechanisms of monitoring and governance. 

Let’s be honest: There is a huge skills gap around AI in governments and other administrative and security structures that must be addressed at the national and global levels. 

A new UN entity would gather expertise and put it at the disposal of the international community.  And it could support collaboration on the research and development of AI tools to accelerate sustainable development. 

As a first step, I am convening a multistakeholder High-Level Advisory Board for Artificial Intelligence that will report back on the options for global AI governance, by the end of this year. 

My upcoming Policy Brief on A New Agenda for Peace will also make recommendations on AI governance to Member States: 

First, it will recommend that Member States develop national strategies on the responsible design, development and use of AI, consistent with their obligations under International Humanitarian Law and Human Rights Law. 

Second, it will call on Member States to engage in a multilateral process to develop norms, rules and principles around military applications of AI, while ensuring the engagement of other relevant stakeholders. 

Third, it will call on Member States to agree on a global framework to regulate and strengthen oversight mechanisms for the use of data-driven technology, including artificial intelligence, for counter-terrorism purposes. 

The Policy Brief on a New Agenda for Peace will also call for negotiations to be concluded by 2026 on a legally binding instrument to prohibit lethal autonomous weapons systems that function without human control or oversight, and which cannot be used in compliance with international humanitarian law. 

I hope Member States will debate these options and decide on the best course of action to establish the AI governance mechanisms that are so urgently needed. 

In addition to the recommendations of the New Agenda for Peace, I urge agreement on the general principle that human agency and control are essential for nuclear weapons and should never be withdrawn.  The Summit of the Future next year will be an ideal opportunity for decisions on many of these inter-related issues. 

I urge this Council to exercise leadership on Artificial Intelligence and show the way towards common measures for the transparency, accountability, and oversight of AI systems.  We must work together for AI that bridges social, digital, and economic divides, not one that pushes us further apart. 

I urge you to join forces and build trust for peace and security.  We need a race to develop AI for good.  To develop AI that is reliable and safe and that can end poverty, banish hunger, cure cancer, and supercharge climate action; AI that propels us towards the Sustainable Development Goals.  That is the race we need, and that is a race that is possible and achievable. 

For information media. Not an official record.