International Community Must Urgently Confront New Reality of Generative Artificial Intelligence, Speakers Stress as Security Council Debates Risks, Rewards
Secretary-General Points to Potentially ‘Defining Moment for Hate Speech, Disinformation’, as Delegates Call for Ethical, Responsible Governance Framework
The international community must urgently confront the new reality of generative and other artificial intelligence (AI), speakers told the Security Council today in its first formal meeting on the subject, as the discussion that followed spotlighted the duality of risk and reward inherent in this emerging technology.
António Guterres, Secretary-General of the United Nations, noting that AI has been compared to the printing press, observed that — while it took more than 50 years for printed books to become widely available across Europe — “ChatGPT reached 100 million users in just two months”. Despite its potential to turbocharge global development and realize human rights, AI can amplify bias, reinforce discrimination and enable new levels of authoritarian surveillance.
The advent of generative AI “could be a defining moment for disinformation and hate speech”, he observed. While questions of governance will be complex for several reasons, the international community already has entry points. The best approach would be to address existing challenges while also creating capacity to respond to future risks, he said, underlining the need to “work together for AI that bridges social, digital and economic divides — not one that pushes us further apart”.
Jack Clark, Co-founder of Anthropic, noted that, although AI can bring huge benefits, it also poses threats to peace, security and global stability due to its potential for misuse and its unpredictability — two essential qualities of AI systems. For example, while an AI system can improve understanding of biology, it can also be used to construct biological weapons. Further, once developed and deployed, people identify new and unanticipated uses for such systems.
“We cannot leave the development of artificial intelligence solely to private-sector actors,” he underscored, stating that Governments can keep companies accountable — and companies can earn the world’s trust — by developing robust, reliable evaluation systems. Without such investment, the international community runs the risk of handing over the future to a narrow set of private-sector actors, he warned.
Also briefing the Council, Yi Zeng of the Institute of Automation at the Chinese Academy of Sciences pointed out that current AI systems are information-processing tools that, while seemingly intelligent, lack real understanding. “This is why they, of course, cannot be trusted as responsible agents that can help humans to make decisions,” he emphasized. Both near-term and long-term AI will carry a risk of human extinction simply because “we haven’t found a way to protect ourselves from AI’s utilization of human weakness”, he said.
In the ensuing debate, Council members alternately highlighted the transformative opportunities AI offers for addressing global challenges and the risks it poses — including its potential to intensify conflict through the spread of misinformation and malicious cyberoperations. Many, recognizing the technology’s military applications, underscored the imperative to retain the element of human decision-making in autonomous weapons systems. Members also stressed the need to establish an ethical, responsible framework for international AI governance.
On that, Omran Sharaf, Assistant Minister for Advanced Sciences and Technology of the United Arab Emirates, stated that there is a brief window of opportunity, available now, where key stakeholders are willing to unite and consider the guardrails for this technology. Member States should establish commonly agreed-upon rules “before it is too late”, he stressed, calling for mechanisms to prevent AI tools from promoting hatred, misinformation and disinformation that can fuel extremism and exacerbate conflict.
Ghana’s representative, building on that point, underscored that the international community must “constrain the excesses of individual national ambitions for combative dominance”. Urging the development of frameworks that would govern AI for peaceful purposes, he spotlighted the deployment of that technology by the United Nations Support Mission in Libya (UNSMIL). Used to determine the Libyan people’s reaction to policies, it facilitated improvements in that country’s 2022 Global Peace Index, he noted, while also cautioning against AI’s integration into autonomous weapons systems.
The speaker for Ecuador similarly rejected the militarization of AI and reiterated the risk posed by lethal autonomous weapons. “The robotization of conflict is a great challenge for our disarmament efforts and an existential challenge that this Council ignores at its peril,” he warned. Adding that AI can either contribute to or undermine peace efforts, he emphasized that “our responsibility is to promote and make the most of technological development as a facilitator of peace”.
China’s representative, noting that AI is a double-edged sword, said that whether it is good or evil depends on how mankind uses and regulates it, and how the balance is struck between scientific development and security. AI development must ensure safety, risk-awareness, fairness and inclusivity, he stressed, calling on the international community to put ethics first and ensure that technology always benefits humanity.
James Cleverly, Secretary of State for Foreign, Commonwealth and Development Affairs of the United Kingdom, Council President for July, spoke in his national capacity to point out that AI could enhance or disrupt global strategic stability, challenge fundamental assumptions about defence and deterrence, and pose moral questions about accountability for lethal decisions on the battlefield. But momentous opportunities lie before the international community, he added, observing: “There is a tide in the affairs of men, which, taken at the flood, leads to fortune.”
Briefings
ANTÓNIO GUTERRES, Secretary-General of the United Nations, recalled that he told the General Assembly in 2017 that artificial intelligence (AI) “would have a dramatic impact on sustainable development, the world of work and the social fabric”. Noting that this technology has been compared to the printing press, he observed that — while it took more than 50 years for printed books to become widely available across Europe — “ChatGPT reached 100 million users in just two months”. The finance industry estimates that AI could contribute up to $15 trillion to the global economy by 2030, and almost every Government, large company and organization in the world is working on an AI strategy. AI has the potential to turbocharge global development — from monitoring the climate crisis to breakthroughs in medical research — and it offers new potential to realize human rights, particularly in the areas of health and education.
He pointed out, however, that the High Commissioner for Human Rights has expressed alarm over evidence that AI can amplify bias, reinforce discrimination and enable new levels of authoritarian surveillance. Urging the Council to approach this technology with a sense of urgency, a global lens and a learner’s mindset, he observed: “Never again will technological innovation move as slowly as today.” While AI tools are increasingly being used — including by the United Nations — to identify patterns of violence, monitor ceasefires and help strengthen peacekeeping, mediation and humanitarian efforts, AI models can help people to harm themselves and each other at massive scale. On that, he said that AI-enabled cyberattacks are already targeting critical infrastructure and peacekeeping operations and that the advent of generative AI “could be a defining moment for disinformation and hate speech”. Outlining other potential consequences, he expressed concern over malfunctioning AI systems and the interaction between AI and nuclear weapons, biotechnology, neurotechnology and robotics.
“Without action to address these risks, we are derelict in our responsibilities to present and future generations,” he stressed. Questions of governance will be complex for several reasons: powerful AI models are already widely available; AI tools can be moved around the world leaving very little trace; and the private sector’s leading role in AI has few parallels in other strategic technologies. However, the international community already has entry points, including the 2018-2019 guiding principles on lethal autonomous weapons systems; the 2021 recommendations on the ethics of AI agreed through the United Nations Educational, Scientific and Cultural Organization (UNESCO); recommendations by the United Nations Office of Counter-Terrorism; and the “AI for Good” summits hosted by the International Telecommunication Union (ITU).
The best approach, he went on to say, would address existing challenges while also creating the capacity to monitor and respond to future risks. The need for global standards and approaches makes the United Nations the ideal place for this to happen, and he therefore welcomed calls from some Member States to create a new United Nations entity to support collective efforts to govern this technology. Such an entity would gather expertise and put it at the international community’s disposal and could support collaboration on the research and development of AI tools to expedite sustainable development. Urging the Council to show the way towards common measures for the transparency, accountability and oversight of AI systems, he underlined the need to “work together for AI that bridges social, digital and economic divides — not one that pushes us further apart”.
JACK CLARK, Co-founder, Anthropic, said: “We cannot leave the development of artificial intelligence solely to private sector actors. The Governments of the world must come together, develop State capacity and make the development of powerful AI systems a shared endeavour across all parts of society, rather than one dictated solely by a small number of firms competing with one another in the marketplace.” He recalled that a decade ago the England-based company DeepMind published research showing how to teach an AI system to play old computer games like Space Invaders. The same techniques used in that research are now being used to create AI systems that can beat military pilots in air combat simulations and even design the components of next-generation semiconductors.
Noting that AI models such as OpenAI’s ChatGPT, Google’s Bard and his own company Anthropic’s Claude are developed by corporate interests, he said that, as private sector actors are the ones that have the sophisticated computers, large pools of data and capital resources to build these systems, they seem likely to continue to define their development. However, while that will bring huge benefits, it also poses potential threats to peace, security and global stability, which emanate from AI’s potential for misuse and its unpredictability — two essential qualities of AI systems. For example, on misuse, he said that an AI system that can help in better understanding biology may also be used to construct biological weapons. On unpredictability, he pointed out that, once AI systems are developed and deployed, people identify new uses for them that were unanticipated by their developers, or the system itself may later exhibit chaotic or unpredictable behaviour.
“Therefore, we should think very carefully about how to ensure developers of these systems are accountable, so that they build and deploy safe and reliable systems which do not compromise global security,” he urged. AI as a form of human labour affords immense political leverage and influence, he pointed out, raising questions about how Governments should regulate this power and who should be allowed to sell such so-called experts. The international community must work on developing ways to test for the systems’ capabilities, misuses and potential safety flaws. For this reason, it has been encouraging to see many countries emphasize the importance of safety testing and evaluation in their various AI policy proposals, he said, naming those of the European Union, China and the United States.
Noting the absence of standards or best practices on how to test these systems for things such as discrimination, misuse or safety, he said Governments can keep companies accountable and companies can earn the world’s trust by developing robust and reliable evaluation systems. Without such an investment, the international community runs the risk of handing over the future to a narrow set of private sector actors, he warned. “If we can rise to the challenge, however, we can reap the benefits of AI as a global community and ensure there is a balance of power between the developers of AI and the citizens of the world,” he said.
YI ZENG, Institute of Automation, Chinese Academy of Sciences, said that there is no doubt that AI is a powerful and enabling technology to push forward global sustainable development. From the peace and security perspective, efforts should focus on using it to identify disinformation and misunderstanding among countries and political bodies. AI should be used for network defences, not attacks. “AI should be used to connect people and cultures, not to disconnect them,” he added. Current AI, including recent generative AI, are information-processing tools that seem intelligent but lack real understanding, and hence are not truly intelligent.
“This is why they, of course, cannot be trusted as responsible agents that can help humans to make decisions,” he emphasized. AI should not be used to automate diplomacy tasks, especially foreign negotiations among different countries, since it may use and extend human limitations and weaknesses to create bigger or even catastrophic risks. “AI should never ever pretend to be human,” he said, stressing the need to ensure sufficient, effective and responsible human control of all AI-enabled weapons systems. Both near-term and long-term AI will carry a risk of human extinction simply because “we haven’t found a way to protect ourselves from AI’s utilization of human weakness”. AI does not “know what we mean by human — [by] death and life”.
“In the long term, we haven’t given superintelligence any practical reasons why they should protect humans,” he continued. Proposing the Council consider the possibility of creating a working group on AI for peace and security, he encouraged members to play an increasing role on this important issue. “Humans should always maintain and be responsible for final decision-making on the use of nuclear weapons,” he emphasized. The United Nations must play a central role to set up a framework on AI development and governance, to ensure global peace and security.
Statements
JAMES CLEVERLY, Secretary of State for Foreign, Commonwealth and Development Affairs of the United Kingdom, Council President for July, spoke in his national capacity to note that AI may help the world adapt to climate change, beat corruption, revolutionize education, deliver the Sustainable Development Goals and reduce violent conflict. “But we are here today because AI will affect the work of the Council,” he observed, pointing out that the technology could enhance or disrupt global strategic stability, challenge fundamental assumptions about defence and deterrence and pose moral questions about accountability for lethal decisions on the battlefield. Further, AI changes the speed, scale and spread of disinformation — with hugely harmful consequences for democracy and stability — and could aid the reckless quest for weapons of mass destruction by State and non-State actors.
“That’s why we urgently need to shape the global governance of transformative technologies,” he underscored. For the United Kingdom, AI should: support freedom and democracy; be consistent with the rule of law and human rights; be safe and predictable by design; and be trusted by the public. Noting that his country is home to many of the world’s trail-blazing AI developers and foremost AI safety researchers, he said that the United Kingdom will bring world leaders together for the first major global summit on AI safety in autumn. Momentous opportunities lie before the international community, he added, observing: “There is a tide in the affairs of men, which, taken at the flood, leads to fortune.”
TAKEI SHUNSUKE, State Minister of Foreign Affairs for Japan, underscored the importance of human-centric and trustworthy AI, noting that the development of AI should be consistent with democratic values and fundamental human rights. “AI should not be a tool for rulers but should be placed under the rule of law,” he said, stressing that military use of AI should be responsible, transparent and based on international law. AI can be made more trustworthy by including a wide range of stakeholders in the process, he said, noting that the United Nations’ convening power can bring together wisdom from around the world. In June, his country hosted a side event at the United Nations with the Office of Counter-Terrorism and the United Nations Interregional Crime and Justice Research Institute and led discussions on the misuse of AI by terrorists. It also launched the Group of Seven (G7) Hiroshima AI Process this year to contribute to the global discussion on generative AI, he added.
MANUEL GONÇALVES, Deputy Minister for Foreign Affairs of Mozambique, said that, in full disclosure, his statement was composed solely by humans and not by generative AI tools like ChatGPT. “We are approaching a point where digital machines can now complete a task that for the majority of human existence was exclusively within the realm of human intelligence,” he continued. While advancements in AI present immense opportunities, they also pose risks, including the potential for catastrophic outcomes. “We should take precautions,” he urged, warning that AI is increasingly imitating humans to spread misinformation and conspiracies and to carry out numerous other nefarious activities.
Turning to AI’s positive impact, he said AI technologies have the potential to transform society — helping to eradicate disease, combat climate change, enhance early warning capabilities and customize mediation efforts. AI can also be used to enhance data for the benefit of humanity. Mozambique recognizes the importance of adopting a balanced approach toward AI, he said, while also noting the “credible evidence” indicating that AI poses a real risk. Therefore, it is crucial to develop an intergovernmental agreement that can govern and monitor the use of AI. It is important to ensure that all relevant actors, including Governments and the private sector, are provided with the technology tools that can ensure the ethical development and use of AI, he stressed.
OMRAN SHARAF, Assistant Minister for Advanced Sciences and Technology of the United Arab Emirates, underlined the need to establish rules for AI, stating that there is a brief window of opportunity available now where key stakeholders are willing to unite and consider the guardrails for this technology. Member States should establish commonly agreed-upon rules “before it is too late”, he stressed, which should include mechanisms to prevent AI tools from promoting hatred, misinformation and disinformation that can fuel extremism and exacerbate conflict. As with other cybertechnologies, the use of AI should be firmly guided by international law, which continues to apply in cyberspace. He also emphasized that AI should become a tool to promote peacebuilding and the de-escalation of conflicts — not a threat multiplier — and that “the biases of the real world should not be replicated by AI”. Adding that flexible and agile regulation is needed, he urged avoiding too-rigid rules that can hamper the evolution of this technology.
ZHANG JUN (China), noting that AI is a double-edged sword, said that whether it is good or evil depends on how mankind utilizes and regulates it and balances scientific development with security. The international community should adhere to putting ethics first and ensure that technology always benefits humanity. AI development must ensure safety, risk-awareness, fairness and inclusiveness, he stressed. Leading technology enterprises should clarify the responsible party and avoid developing risky technology that could pose serious negative consequences. Meanwhile, developing countries must enjoy equal access and use of AI technology, products and services. His country has actively explored AI development and governance in all fields, he said, noting that the Government in 2017 issued the New Generation Artificial Intelligence Development Plan. In recent years it has continuously improved relevant laws and regulations, ethical norms, intellectual property standards, safety monitoring and evaluation measures to ensure the healthy and orderly development of AI.
JEFFREY DELAURENTIS (United States) said that AI offers incredible promise to address global challenges. Automated systems are already helping to grow food more efficiently, predict storm paths and identify disease in patients. AI, however, also has the potential to intensify conflict, including by spreading misinformation and carrying out malicious cyberoperations. The United States is committed to working with a range of actors, including Member States, technology companies and civil society actors, he said. On 4 May, President Joseph R. Biden met with leading AI companies to underscore the responsibility to ensure AI systems are safe and trustworthy. The United States is also identifying principles to guide the design, use and deployment of automated systems. Military use of AI must be ethical and responsible. Earlier this year, the United States proposed a political declaration on the responsible military use of AI, he said, and encouraged all Member States to endorse this declaration.
SÉRGIO FRANÇA DANESE (Brazil) said artificial intelligence is developing so fast that even the best researchers are unable to assess the full scale of the challenges and benefits that these new technologies can provide. “What we know for sure is that artificial intelligence is not human intelligence,” he said, adding that human oversight is essential to avoid bias and errors. Even though AI has been developed mostly for civilian applications, it can be predicted with certainty that its applications will be extended to the military field and have a relevant impact on peace and security. Recalling the concept of “meaningful human control”, he underscored that humans must remain responsible for decisions on the use of weapons systems. A human element in any autonomous system is essential for the establishment of ethics standards and for full compliance with international humanitarian law. “There is no replacement for human judgment and accountability,” he asserted.
PASCALE CHRISTINE BAERISWYL (Switzerland) echoed the words of the robot “Ameca”, speaking to a journalist at the “AI for Good” conference in Geneva: “I believe it’s only a matter of time before we see thousands of robots like me out there making a difference.” While a challenge due to its speed and apparent omniscience, AI can and must serve peace. “It’s in our hands to ensure that AI makes a difference to the benefit and not the detriment of humanity,” she emphasized, adding: “let’s seize the opportunity to lay the groundwork towards AI for good by working closely with cutting-edge science”. In this regard, the Swiss Federal Institute of Technology Zurich is developing a prototype of an AI-assisted analysis tool for the United Nations Operations and Crisis Centre which could explore AI’s potential for peacekeeping, particularly for the protection of civilians and peacekeepers. Additionally, Switzerland recently launched the “Swiss Call for Trust & Transparency initiative”, where academia, private sector and diplomacy jointly seek practical and rapid solutions to AI-related risks.
HAROLD ADLAI AGYEMAN (Ghana) underscored that the international community must “constrain the excesses of individual national ambitions for combative dominance”, urging the development of frameworks that would govern AI for peaceful purposes. For Ghana, opportunity lies in developing and applying that technology to identify early warning signs of conflict and to define responses that have a higher rate of success. AI can also be applied to peace mediation and negotiation efforts, he said, noting that the deployment of that technology by the United Nations Support Mission in Libya (UNSMIL) to determine the Libyan people’s reaction to policies facilitated improvements in that country’s 2022 Global Peace Index. AI also presents risks — including its integration into autonomous weapons systems — and, on that, he observed: “The history of our experience with mankind’s mastery in atomic manipulation shows that, should such desires persist, it only generates, in equal measure, efforts by other States to cancel the advantage that such a deterrence seeks to create”.
NICOLAS DE RIVIÈRE (France) said AI must be a tool for peace, noting that these technologies can contribute to the safety of the blue helmets, improve the protection of civilians and facilitate the delivery of humanitarian assistance. However, it also poses risks, he pointed out, noting that AI is liable to heighten cyberthreats and help malicious actors wage cyberattacks. At the military level, AI is changing the nature of conflict, he said, underscoring the need to develop an applicable framework for lethal autonomous weapons. Such a framework can help ensure that future conflicts are conducted in a way that respects international humanitarian law, he added. Affirming his country’s commitment to advancing an ethical and responsible approach to AI, he said that was the aim of the global partnership it launched in 2020, with the European Union and Council of Europe, which has been working on rules to regulate and support AI development.
HERNÁN PÉREZ LOOSE (Ecuador) said that AI has already developed at “break-neck speed” and will continue to do so. AI can contribute to peacekeeping and peace efforts, or it can undermine them; it can help prevent conflicts and moderate dialogues in complex situations, as was the case during the peak of the COVID-19 pandemic. AI can also improve the security of peacekeeping camps and convoys by monitoring the situation more effectively. “Our responsibility is to promote and make the most of technological development as a facilitator of peace,” he said. This can be done only by strictly upholding international human rights law and international humanitarian law. Ecuador categorically rejects the militarization of AI and reiterates the risk posed by lethal autonomous weapons. “The robotization of conflict is a great challenge for our disarmament efforts and an existential challenge that this Council ignores at its peril,” he said.
VANESSA FRAZIER (Malta) said that, as AI governance and control practices must be developed at a pace comparable to the technology itself to safeguard international peace and security, the Council must push for strong AI governance and ensure the technology’s inclusive, safe and responsible deployment through the sharing of experiences and governmental frameworks. Since 2019, her country has been developing an Ethical AI Framework, aligned with the European Ethics Guidelines for Trustworthy AI, she said, further describing Malta’s efforts in the field. She voiced concern about the use of AI systems in military operations, stressing that machines cannot make human-like decisions involving the legal principles of distinction, proportionality and precaution. Moreover, lethal autonomous weapons systems currently exploiting AI should be banned, and only those weapons systems that fully respect international humanitarian law and human rights law should be regulated, she added.
LILLY STELLA NGYEMA NDONG (Gabon) said that AI is increasing the analytical capacity of early warning systems, thereby making it easier to detect emerging threats by analysing vast quantities of data from various sources very quickly. This has enabled United Nations peacekeeping missions to perform better, particularly in the area of civilian protection. AI has also contributed to States’ post-conflict reconstruction efforts, along with fostering the implementation of quick-impact projects, employment opportunities for youth and the reintegration of former combatants. She underscored, however, that local communities must take ownership of and absorb these new technologies “to perpetuate their beneficial effects after the withdrawal of international forces” — lest such benefits disappear, and crises resurface. Also stressing the need to bolster transparency, international governance and accountability regarding AI, she called on the United Nations to expand international cooperation to develop a regulatory framework with appropriate control mechanisms and robust security systems.
FERIT HOXHA (Albania) said AI holds great promise to transform the world like never before, but also poses potential risks that could impact people’s safety, privacy, economy and security. Some countries continually attempt to deliberately mislead people, distort facts, and interfere in democratic processes of others by misusing digital technologies, he said, underscoring the urgency of establishing the necessary AI safeguards and governance frameworks at the national and international levels. Clear lines of responsibility and authority are also needed to ensure that AI systems are used appropriately, safely and responsibly for the good of all. Moreover, AI systems must not infringe on human rights and freedom nor undermine peace and security. The international community must promote standards for responsible State behaviour and the applicability of international law in the use of AI and its technologies, as well as in the monitoring and assessment of risks and implications, he said, highlighting the Council’s role in that regard.
DMITRY A. POLYANSKIY (Russian Federation) said that the development of autonomous weapons systems can pose risks to security because such systems can make decisions about the use of force. AI can also be used in the creation and spread of disinformation and “fake news”, which undermine trust and cause tensions. With respect to lethal autonomous systems, he said that the issue is discussed in the General Assembly and that duplication of such efforts is counterproductive. “The West has no ethical qualms about knowingly allowing AI to generate misanthropic statements in social networks,” he continued. Turning to digital inequality, he said that in Europe Internet access is enjoyed by approximately 89 per cent of the population. In low-income countries, only one quarter of the population enjoys such benefits. Historically, digital technologies were developed at the private level, and Governments lagged in regulating them. “This trend needs to be reversed,” he stressed.