Security Council Debates Use of Artificial Intelligence in Conflicts, Hears Calls for UN Framework to Avoid Fragmented Governance
Russian Federation Warns Against Imposing West-Led Rules, Norms
Rapidly evolving artificial intelligence (AI) is outpacing human ability to govern it, even threatening human control over weapons systems, the United Nations chief warned during a Security Council briefing today, urging Member States to swiftly establish “international guard-rails” to ensure a safe, secure and inclusive AI future for all.
“Artificial intelligence is not just reshaping our world — it is revolutionizing it,” underscored Secretary-General António Guterres. AI tools are identifying food insecurity and predicting displacements caused by extreme events and climate change, detecting and clearing landmines, and soon will be able to spot patterns of unrest before violence erupts.
However, recent conflicts have become testing grounds for AI military applications, he pointed out, noting that algorithms, from intelligence-based assessments to target selection, have reportedly been used in making life-and-death decisions. “Artificial intelligence without human oversight would leave the world blind — and perhaps nowhere more perilously and recklessly than in global peace and security,” he warned, adding that “deep fakes” could trigger diplomatic crises, incite unrest and undermine the very foundations of societies. The integration of AI with nuclear weapons must be avoided at all costs, he emphasized.
Amid the pressing need for “unprecedented global cooperation” in reducing fragmentation of AI governance, his High-Level Advisory Body on AI has developed a blueprint for addressing both the profound risks and opportunities that AI presents to humanity, he noted, adding that it has also laid “the foundation for a framework that connects existing initiatives — and ensures that every nation can help shape our digital future”.
Member States should move swiftly in establishing the International Scientific Panel on AI and launching the Global Dialogue on AI Governance within the United Nations, as set forth in the UN Global Digital Compact. “We must never allow AI to stand for ‘Advancing Inequality’,” he added, underscoring the need to support developing countries in building AI capabilities. “Members of this Council must lead by example and ensure that competition over emerging technologies does not destabilize international peace and security,” he urged.
Fei-Fei Li, Sequoia Professor in the Computer Science Department at Stanford University, Co-Director of Stanford’s Human-Centered AI Institute and Member of the Secretary-General’s Scientific Advisory Board, speaking via videoconference, spotlighted a new technology called spatial intelligence, which allows AI systems to perceive and interact with the 3D virtual and physical worlds. “This work has illuminated further promises of this technology, bringing us to some of the most exciting frontiers of innovation,” she said, citing examples such as robots that navigate disaster zones to save lives, precision agriculture systems that address food insecurity and advanced medical imaging tools that improve healthcare outcomes.
“Yet, we must also remain vigilant,” she warned, spotlighting AI’s ability to harm. Member States must act with urgency and unity to ensure that “AI serves humanity rather than undermining it” and that “everyone has equitable access to AI tools”.
A multilateral AI research institute — a network of research hubs bringing together experts from across disciplines and pooling resources across nations — would advance tech innovation and set global norms for responsible AI development and deployment, she said. Governments must foster public sector leadership, champion global collaboration and advance evidence-based policymaking, she added; in doing so, “we can unlock AI’s transformative potential while safeguarding its responsible development”.
Also briefing the Council was Yann LeCun, Chief AI Scientist at Meta and Jacob T. Schwartz Professor of Computer Science, Data Science, Neural Science and Electrical and Computer Engineering at New York University, who said: “There is no question that, at some point in the future, AI systems will match and surpass human intellectual capabilities.” By amplifying human intelligence, AI may bring, not just a new industrial revolution, but “a new period of enlightenment for humanity”, contributing towards the maintenance of international peace and security by “supercharging the diffusion of knowledge and powering global economic growth”.
“Governments and the private sector must work together to ensure this global network of infrastructure exists to support AI development in a way that enables people all over the world to participate in the creation of a common resource,” he said. International cooperation must focus on two initiatives, he continued: first, collecting cultural material, providing AI-focused supercomputers in multiple regions around the world and establishing a modus operandi for the distributed training of a free and open universal foundation model; and second, unifying the regulatory landscape so that the development and deployment of open-source foundation models are not hindered.
Regarding Governments’ concerns about a handful of companies controlling the “digital diet of their citizens”, he said Meta has taken a leading role in producing and distributing free and open-source foundation models. On AI-generated disinformation, he said: “There is no evidence that current forms of AI present any existential risk, or even a significantly increased threat over traditional technology such as search engines and textbooks.”
In the ensuing high-level discussion, Council members underscored the urgent need for coordinated action to prevent the misuse of AI, especially threats to global peace and security, while spotlighting various governance initiatives.
Antony J. Blinken, Secretary of State of the United States, Council President for December, speaking in his national capacity, said that while AI can help achieve 80 per cent of the Sustainable Development Goals, it can also be deployed for destructive and hard-to-trace cyberattacks, and by repressive regimes in targeting journalists. Urging States to condemn and reject its malicious use by any actor, he said his country has been working to set rules around the use of AI and mobilize a collective response. Leading American technology companies have committed to the use of watermarks for AI-generated content, for example, and last month, an international network of artificial intelligence safety institutes was launched to set benchmarks for testing and safety.
Gabriela Sommerfeld, Minister for Foreign Affairs and Human Mobility of Ecuador, highlighted AI’s potential to buttress peacekeeping operations through early warning systems and support for mediation. However, the unbridled development of artificial intelligence, without regulation and respect for human rights, poses risks, including concentrating power, exacerbating geopolitical tensions and weakening democratic processes. Non-State actors can misuse AI in recruitment, coordination and incitement, she added, calling for a coordinated effort to govern artificial intelligence technologies. Setting up an international panel on the “state of play” for AI, like the Intergovernmental Panel on Climate Change, is an “interesting” option, she said.
Verónica Nataniel Macamo Dlhovo, Minister for Foreign Affairs and Cooperation of Mozambique, pointed out that “while the electrical power grid took 50 years to reach 100 million users, recent AI applications, like ChatGPT, reached the same milestone in just two months in 2022”. The dual-use nature of AI-based technologies points to the urgency of anticipatory governance, she said, stressing that AI must be aligned with the Charter of the UN and the Universal Declaration of Human Rights. The Council has a crucial role in ensuring that AI is a “force for global peace, progress and stability,” she added.
Thomas Gürber, State Secretary of Switzerland, highlighting AI’s major impact on UN diplomacy over the past two years, stressed the need for rules to ensure that AI systems are safe, secure and responsibly managed, and inclusive of all State and non-State stakeholders. Spotlighting AI’s ability to support the implementation of the Council’s mandates, he recalled the Switzerland-organized May 2024 Arria-formula meeting, which illustrated this potential in peace operations. In collaboration with the Geneva-based DiploFoundation, Switzerland has developed an AI-based tool that facilitated data analysis from ten Council meetings, particularly focusing on the New Agenda for Peace, he added.
Several speakers joined the Secretary-General’s repeated calls for adequate regulation of autonomous weapons. Malta’s delegate urged consensus to advance discussions within the Group of Governmental Experts on Lethal Autonomous Weapons in Geneva. Moreover, ethical considerations must be embedded into artificial intelligence development, as called for by the United Nations Educational, Scientific and Cultural Organization’s (UNESCO) Recommendation on the Ethics of Artificial Intelligence.
France’s representative said it will host the Artificial Intelligence Action Summit on 10 and 11 February 2025, to form “a common bedrock for governance”. He urged the Council to improve its consideration of artificial intelligence, including in the monitoring of the implementation of sanctions regimes.
The United Kingdom’s delegate said his country is an inaugural signatory to the Council of Europe’s Framework Convention on Artificial Intelligence and human rights, democracy and the rule of law — the first-ever international legally binding treaty in the field. He also highlighted London’s £58 million contribution for capacity building, under the aegis of the AI for Development initiative, to help six African countries conduct AI research.
Guyana’s delegate was among speakers who voiced concern about the use of artificial intelligence in the Occupied Palestinian Territory, noting that AI weapons are also being programmed and authorized to select their targets without further human authorization. Echoing that concern, Slovenia’s representative urged the Security Council to address AI-related risks and ensure that the technology’s use complies with international law. “These risks are not speculative or distant; they are a reality in contemporary conflicts,” she emphasized.
The Republic of Korea’s representative said the Blueprint for Action from the Summit on Responsible Artificial Intelligence in the Military Domain in Seoul, which his country co-hosted with the Netherlands in September, lays out key principles and can serve as a valuable stepping stone for the international community to achieve the responsible military use of AI. His counterpart from Japan said appropriate measures should be implemented throughout the lifecycle of military AI capabilities. In the non-military domain, interoperability among different AI governance networks must be ensured.
The Russian Federation’s delegate said that, while the United States’ ambitions to consider the fate of humanity in the age of rapid AI development are understandable, it is equally important to recognize a key lesson from history: attempts to impose rules on others while exempting oneself from those same rules risk repeating past mistakes and undermining the path toward genuine global cooperation. His country’s national AI strategy foresees the provision of technical assistance to the Global South and East, he said, noting that AI system algorithms must be based on the “cultural and national specifics of each civilization”. The Security Council is not an appropriate format for considering AI, he added, as the Summit of the Future already outlined the contours of an infrastructure for considering this subject within the UN system.
Sierra Leone’s delegate warned against artificial intelligence-enabled disinformation campaigns, which could destabilize fragile social fabrics and undermine democratic processes in Africa.
“The time has come for a binding framework that prevents the misuse of military AI”, underscored Algeria’s representative, pointing out that a growing AI divide “is not about machines and algorithms — it is about sovereignty itself”, as AI-powered, border-proof attacks can damage societies and “manipulated information can poison minds”. Africa's Continental Artificial Intelligence Strategy and Digital Compact are the continent’s vision for AI for peace, he said, calling for inclusive international mechanisms where developing countries are “equal architects for our shared future”.
China’s representative spotlighted his country’s work on AI governance, including its Code of Ethics for the new generation of AI released in 2021, and called for establishing clear guidelines and enhancing “smart governance” and technological innovation, so that AI will not become the tool for waging wars and pursuing hegemony. “AI technology is not a cake for a small group of people”, he emphasized, stressing that the UN should become the main channel for global AI governance.