
SG/SM/21832

Secretary-General Urges Broad Engagement from All Stakeholders towards United Nations Code of Conduct for Information Integrity on Digital Platforms

Following is UN Secretary-General António Guterres’ press briefing on information integrity on digital platforms, in New York today:

New technology is moving at warp speed, and so are the threats that come with it.

Alarm bells over the latest form of artificial intelligence (AI) — generative AI — are deafening, and they are loudest from the developers who designed it.

These scientists and experts have called on the world to act, declaring AI an existential threat to humanity on a par with the risk of nuclear war.

We must take those warnings seriously. Our proposed Global Digital Compact, New Agenda for Peace, and Accord on the global governance of AI will offer multilateral solutions based on human rights.

But the advent of generative AI must not distract us from the damage digital technology is already doing to our world.

The proliferation of hate and lies in the digital space is causing grave global harm — now.

It is fueling conflict, death and destruction — now.  It is threatening democracy and human rights — now.  It is undermining public health and climate action — now.

When social media emerged a generation ago, digital platforms were embraced as exciting new ways to connect.

And, indeed, they have supported communities in times of crisis, elevated marginalized voices and helped to mobilize global movements for racial justice and gender equality.

Social media platforms have helped the United Nations to engage people around the world in our pursuit of peace, dignity and human rights on a healthy planet.  But today, this same technology is often a source of fear, not hope.

Digital platforms are being misused to subvert science and spread disinformation and hate to billions of people.

Some of our own United Nations peacekeeping missions and humanitarian aid operations have been targeted, making their work even more dangerous.

This clear and present global threat demands clear and coordinated global action.

Our policy brief on information integrity on digital platforms puts forward a framework for a [concerted] international response.

Its proposals are aimed at creating guardrails to help Governments come together around guidelines that promote facts, while exposing conspiracies and lies and safeguarding freedom of expression and information.

And to help tech companies navigate difficult ethical and legal issues and build business models based on a healthy information ecosystem.

Governments have sometimes resorted to drastic measures — including blanket Internet shutdowns and bans — that lack any legal basis and infringe on human rights.

Around the world, some tech companies have done far too little, too late to prevent their platforms from contributing to violence and hatred.

The recommendations in this brief seek to make the digital space safer and more inclusive while vigorously protecting human rights.

They will inform a United Nations Code of Conduct for Information Integrity on Digital Platforms that we are developing ahead of next year’s Summit of the Future.  The Code of Conduct will be a set of principles that we hope Governments, digital platforms and other stakeholders will implement voluntarily.

The proposals in this policy brief, in preparation for the Code of Conduct, include:

A commitment by Governments, tech companies and other stakeholders to refrain from using, supporting, or amplifying disinformation and hate speech for any purpose.

A pledge by Governments to guarantee a free, viable, independent, and plural media landscape, with strong protections for journalists.

The consistent application of policies and resources by digital platforms around the world to eliminate double standards that allow hate speech and disinformation to flourish in some languages and countries, while they are prevented more effectively in others.

Agreed protocols for a rapid response by Governments and digital platforms when the stakes are highest — in times of conflict and high social tensions.

And a commitment from digital platforms to make sure all products take account of safety, privacy and transparency.

That includes urgent and immediate measures to ensure that all AI applications are safe, secure, responsible and ethical, and comply with human rights obligations.

The brief proposes that tech companies should undertake to move away from damaging business models that prioritize engagement above human rights, privacy, and safety.

It suggests that advertisers — who are deeply implicated in monetizing and spreading damaging content — should take responsibility for the impact of their spending.

It recognizes the need for a fundamental shift in incentive structures.

Disinformation and hate should not generate maximum exposure and massive profits.

The brief suggests that users — including young people who are particularly vulnerable — should have more influence on policy decisions, and it proposes that digital platforms make a commitment to data transparency.

Users should be able to access their own data.  Researchers should have access to the vast quantities of data generated by digital platforms, while respecting user privacy.

I hope this policy brief will be a helpful contribution to discussions ahead of the Summit of the Future.

We are counting on broad engagement and strong contributions from all stakeholders as we work towards a United Nations Code of Conduct for Information Integrity on Digital Platforms.

We don’t have a moment to lose, and I thank you for your attention and for your presence.

Question:  Thank you, Secretary-General, for this press conference, on behalf of the United Nations Correspondents Association.  Valeria Robecco from ANSA newswire.  So my question is, how confident are you that tech companies and Governments will take concrete steps to make the digital space safer and more inclusive?  And how long could it take to see concrete results?  And if I may, can I also ask you for a comment on the death of the former Italian Prime Minister, Silvio Berlusconi?  Thank you so much.

Secretary-General:  In relation to the death of the former Prime Minister, I can only express my condolences to the family, naturally, and to the Italian Government and to the Italian people.

Now, how confident I am — that’s a question I ask myself.  We are dealing with a business that generates massive profits.  And we are dealing also, in some situations, with Governments that do not entirely respect human rights.  So, this is a constant battle.  And in this constant battle, we must mobilize all those that are committed to information integrity on digital platforms.  And that means mobilizing Governments, mobilizing platforms, mobilizing people, and mobilizing those that advertise on platforms.

And there are many initiatives taking place.  We have in the European Union not only an act but also a code of conduct.  This is a very important initiative, even if limited to the European space.  We have other Governments that have started to look into forms of regulation.  But there is an awareness that regulation is not easy, because things are moving very quickly.  And so, we need to find other mechanisms, including multi-stakeholder approaches, to define guardrails, to define red lines and, at the same time, to exchange best practices and to make sure that the business models are put into question.  And there is a central aspect:  Of course, these platforms must make money, but the problem is that the present business model prioritizes engagement over privacy, truth, and the human rights of people.

A study by MIT has demonstrated that false information tends to spread six times faster than true information on one of the platforms.  I’m not going to say which platform it is, but the study was done in relation to one of the platforms.  So, it is important that platforms understand that, while theirs is naturally a profitable activity, that activity cannot generate massive profits from a model of engagement that comes before any other consideration: human rights, privacy, safety.  So, everybody needs to be engaged.  And this code of conduct, which we hope will be published in the movement towards the Summit of the Future, is of course not a solution in itself, but it will be global, not tied to any specific part of the world, and, as it is on a voluntary basis, I hope it will be a very strong instrument to allow all those interested to commit to what needs to be done in order to guarantee, or at least to seriously promote, information integrity on digital platforms.

Question:  Thank you very much, Secretary-General.  Edith Lederer from the Associated Press.

Secretary-General:  I know. [laughter]

Question:  You and your predecessors have all said that the greatest power of the United Nations is its power to convene.  With artificial intelligence, which you just mentioned yourself, and this letter signed by 350 of the top scientists, including its makers, don’t you think that the United Nations should use its convening power immediately, in the coming months and in September, when all the world leaders are gathered here, to at least start a discussion, while you have the whole world here, on trying to figure out how the world should address both artificial intelligence and, at the same time, misinformation, disinformation, and hate speech on the Internet?

Secretary-General:  Well, we are very committed to doing everything possible in this regard.  First of all, we will not be competing for summits.  A Member State has already announced its intention to convene a summit on artificial intelligence this year, and we will of course not try to create conditions of competition.  We will support that initiative.  On the other hand, we believe that a summit must be preceded by serious work.  I am going to appoint, in the next few days, a scientific advisory board that includes a number of experts from outside, including two experts on artificial intelligence, and the chief scientists of UN agencies, namely ITU and UNESCO, which have been very active in this regard.

It is also my intention, immediately after the SDG (Sustainable Development Goals) Summit (and, as you know, there is a very strong commitment by Member States to make sure that the SDG Summit is the central summit of our meetings in September), to create a high-level advisory body on artificial intelligence to seriously prepare the different kinds of initiatives that we will be able to take.  At the same time, I would look favourably on any initiative of Member States.  This is one of the areas that has already been discussed in several sectors and that of course depends on Member States’ initiative, but I would be favourable to the idea that we could have an artificial intelligence agency, which I would say would be inspired by what the International Agency of Atomic Energy [sic] is today.  So, I do believe that there are a number of things on which it is important to move forward.  Some of them of course require the initiative of Member States; others we could leave to the different parties.  We will try to be at the centre of all the networks and movements that will be created in order to make this agenda move forward, knowing that it is not easy to move forward on an agenda in which, as I have said many times, the world has not invested sufficiently in recent decades in the quality of public administrations.  Today, we feel how difficult it is for States and for international organizations to compete, from the scientific and technical point of view, with the platforms that have in the meantime acquired enormous potential and enormous knowledge, so this is not going to be an easy question.  It also requires the commitment of the platforms themselves and of the AI creators themselves, but we will do our best to be a platform where everybody can come together in order to make this agenda advance positively.

Question:  Just as a quick follow-up, could you expand on how you would envision this IAEA-like agency for artificial intelligence?

Secretary-General:  As I said, this is something that depends on Member States’ will, and only Member States can create it, not the Secretariat of the United Nations.  But what I said is that this has been discussed on different platforms.  This is something I would see positively.  The advantage of the IAEA is that it is a very solid, knowledge-based institution and, at the same time, even if in a limited way, it has some regulatory functions.  So, I believe this is a model that could be very interesting.

Question:  Hello, Linda Fasulo, NPR.  My question is, you mentioned of course businesses and countries that have closed down Internet services.  So, you have big democracies basically trying to make money, you know, watching their stock market prices rise, and you have non-democratic societies that are shutting down the Internet.  These approaches seem so disparate in how you deal with each side.  What do you see as the first steps in terms of what really needs to be done, because you obviously have to take different approaches?  So, I was just wondering how that would work.

Secretary-General:  I don’t think that there is a first step.  I think all steps are necessary and the question here is that it is not easy to establish a regulatory framework, like the ones that exist in areas that do not evolve very quickly.  It is practically impossible to establish a solid regulatory framework in which everything is decided forever.  This doesn’t work in something that moves so quickly.  So, we need to have some intergovernmental processes defining some red lines.  To give an example, I have been consistently appealing for the prohibition of autonomous weapons, weapons that are able to kill without human agency.  This has been my appeal.  This is the kind of regulatory framework that can only be done by Member States if they want to have international law in relation to this.

So, there are areas where international law is possible, depending on the will of Member States to establish it.  But there are areas where things move so quickly that, if you establish a set of norms today, they will probably be outdated tomorrow.  So we need a process — a constant process of intervention by the different stakeholders, working together to permanently establish a number of soft-law mechanisms: norms, codes of conduct and others.  It is in that approach of trying to bring together the different actors that we will be working.  And this code of conduct that we intend to prepare, and that will be issued before the Summit of the Future under my authority, is something that, with the consultations that will take place beforehand and given its nature, we hope will be supported, and that commitments will be made on its basis by Governments, by platforms, by advertisers, and by civil society organizations.

Question:  James Bays from Al Jazeera.  Secretary-General, you’ve been speaking about AI for quite a long time and I’m sure you’ve been thinking about it as well.  In your opening statement, you quoted scientists and experts saying that it could be a risk as high as nuclear war.  Some are also, though, saying that it could cure cancer.  So, I would just like your assessment.  What do you think are the challenges and the opportunities of AI?

Secretary-General:  I think the opportunities are immense.  If you look at health, if you look at the environment, if you look at education, what AI can produce can be an extremely important factor in making the Sustainable Development Goals a reality.  So, AI has an enormous potential.  But it is clear that AI has a serious problem, which is the possible removal of human agency, and that for me is the central question.  It is absolutely essential that human agency remains present in everything that is built with artificial intelligence.  And the risk, of course, is that this principle could be put aside completely.

Now, when I look into these testimonies, I take them seriously.  But as I said in the beginning, the fact that catastrophic consequences might come from artificial intelligence in the future should not distract us from what is happening today in the digital world, which is contributing to people being killed, to human rights being violated, to our privacy being completely destroyed, and to the data we produce fully escaping our control.  So, many things are happening today that we need to deal with, but at the same time, we need to do everything possible to make sure that the future evolution of AI does not follow a logic that completely abolishes human agency and creates a monster that nobody will be able to control.

Question:  Thank you, Secretary-General, it’s Pamela Falk from CBS News.  Good to see you and hear about this.  My question is about the inputs.  Have you spoken with most of the AI-generating digital platforms… and to the DSG (Deputy Secretary-General), as well… How supportive are they of this?  A few of my colleagues have mentioned the OpenAI proposal for a UN agency to be involved.  How concerned are you, in that context, that Governments are now being pulled in to regulate this?  I mean, you were a Prime Minister.  We know that Governments are not where AI was principally developed.  So, are you in any way concerned about government interference in the development of this code?  Thank you.

Secretary-General:  First of all, we recently had, in our Senior Management Group, a meeting in which the CEO of OpenAI took part, together with several other experts.  And we were seriously discussing these questions.  And it is interesting to see that today, those that are on the front line of the development of AI are the first to say that regulation is needed.  Now, it’s not clear what exactly some of them mean, but it is obvious that there is an understanding that what they are developing has some risks and that those risks need to be contained.  Now, I don’t think this problem can be solved by saying, “You regulate, and we don’t care.  We will follow the regulations.”  It’s too complicated for that.  I think Governments, platforms, developers, scientists and civil society need to come together to find ways in which there is a consensus on how we can move in the right direction.  So, it’s not enough; it would be a disaster to say:  “We don’t want to be regulated.  We want to do it as we like.”  It would also be a disaster to say:  “The government will regulate, and we do what we want.”  No, I think there is a common responsibility here.  This is to be handled with a strong engagement of all parties, and there is a lot that needs to be brought together.  I must say, I’ve listened to several scientists, I’ve listened to several people involved in these platforms themselves, and it is clear that there is a lot to be done to reach a common understanding about how we can move forward to the benefit of humankind while avoiding the risks that are already materializing.

Question:  Good morning, Secretary-General, it’s Margaret Besheer, Voice of America.  Secretary-General, when do you envision this code of conduct going into effect?  And what’s the reaction been from the Member States you’ve consulted with about it?  And if I could just ask you separately on Sudan, the fighting has resumed, the 24-hour ceasefire has ended, Mr. [Volker] Perthes has been declared persona non grata.  Is there any realistic, effective political role for the United Nations right now in Sudan?

Secretary-General:  Of course there is.  I mean, there is massive engagement of the UN in relation to humanitarian aid in Sudan.  All our agencies are deeply involved.  The Deputy SRSG (Special Representative of the Secretary-General) is in Sudan, coordinating that effort, and at the same time, we will be, as we have said since the beginning, supporting an African solution to the problem — supporting the African Union and supporting IGAD (the Intergovernmental Authority on Development), and we will be totally committed to doing so.  We don’t intend to have any protagonism; we never did.  We are here to support African solutions to the Sudanese problem.

Question:  So, you don’t see a primary role then for the United Nations?

Secretary-General:  Since the beginning, we have said that we are there, that we are part, you know, of the initiative that the African Union recently launched.  There was a mechanism that was created to which we belong and we are there from the beginning to essentially support an African initiative that can lead, hopefully, to a solution of the problem with, of course, our deep condemnation of all the killings and the violence that is taking place.  Coming back to… I’m sorry.

Question:  The first part of it was:  When do you expect this Code of Conduct to go into effect and what’s the reaction been so far?

Secretary-General:  The Code of Conduct — now, we have presented the set of principles.  Based on this set of principles, we will conduct a number of important consultations with Governments, with platforms, with scientists, with civil society.  And it is my intention to issue the Code of Conduct after all these consultations, before the Summit of the Future.  The Code of Conduct is not something to be approved by the General Assembly.  It is something that will be our contribution, and we hope, of course, that that contribution can be useful for all parties:  for Governments, for platforms, for civil society, for scientists, and for those that advertise on platforms and have a very important role to play.

Question:  Thank you, Secretary-General, Michelle Nichols from Reuters.  I need to ask you about Ukraine while we have you.  How concerned are you that Russia will quit the Black Sea Grain Deal next month?  And on the use of drones in Ukraine, the [United States] and others have accused Iran of supplying Russia with drones for use in Ukraine.  Your report on the implementation of resolution 2231 (2015) is due to go to the [Security] Council shortly.  What has that determined regarding the wreckage of drones found in Ukraine?  Where were they made?  Who sent them?  Did you send experts to inspect them?

Secretary-General:  So, first of all, I am concerned, and we are working hard to make sure that it will be possible to maintain the Black Sea Initiative and, at the same time, that we are able to continue our work to facilitate Russian exports.  We will be making our report following strictly the norms that were established in the resolution for the elaboration of reports.

Question:  Alan Bulkaty from RIA Novosti news agency.  Secretary-General, in your draft Code of Conduct, you mention the fight against misinformation and disinformation.  What kind of body or framework do you propose to judge where the truth is and where the misinformation is?  Who should do that?

Secretary-General:  I think all are needed for that.  It is important that platforms have their own systems that are able to filter disinformation and misinformation.  It is important that civil society is attentive to this, it is important that users themselves are empowered in this regard, and it is important that Governments establish, as was the case in the European Union, some mechanisms that are possible within regulatory frameworks; that is difficult, but at least we have a demonstration that such mechanisms are possible to a certain extent.  So, everybody must be involved, and advertisers have a key role to play in order to make sure that they do not advertise in ways that support the expansion of misinformation.

Question:  I’m Yvonne Murray, Irish media RTE; thank you very much for this briefing.  My question is about disinformation, which is often an accusation leveled against journalists, especially those operating in authoritarian States.  How do you come up with a Code of Conduct that protects journalists without handing another stick to authoritarian Governments to beat journalists with?

Secretary-General:  One of the principles in the preparation of the Code of Conduct is support for independent media, and one of the concerns that this set of principles establishes is that it should not be a pretext to limit freedom of information, and that it is essential to support and protect journalists in this context.  If you look at misinformation and disinformation, let’s be frank:  This is not the product of journalism.  This is the product of people who, for political reasons, for economic reasons, or for other kinds of interests, are interested in leading the public into a complete misperception of the challenges that we face in today’s world.

Question:  Journalists operate in democratic societies freely and openly, but not in authoritarian ones.  So, how do you address that issue, because authoritarianism is clearly growing?

Secretary-General:  Our position has been very clear:  condemning strongly all violations of the human rights of journalists.  And this is an area where we will continue to be very active in our condemnation, and we know that more journalists were killed last year than in any other year in recent times, I think 40 per cent more than in the previous year, and killing is just the most radical form.  We have imprisonment, we have harassment, we have all those things, and these are very strong areas of our advocacy.  Of course, it is not in our power to change the nature of Governments, and I am not yet able to select the Governments around the world. [Laughter] I do not yet have that capacity, and I don’t think I should have it.

Question:  Joseph Klein, Canada Free Press.  Thank you, sir, for this briefing.  A number of tech executives, including some AI innovators, have called for a pause in the very rapid development of AI technology before it gets completely out of control, so that scientists and policymakers can assess where this is going and perhaps suggest some guardrails, possibly including human agency.  Do you support such a pause?  And do you see the board that you are setting up, including AI scientists, as helping in that process of educating during such a pause?

Secretary-General:  I think a pause can be an interesting idea, but I don’t think a pause will solve the problem.  Because we all know of situations in which there is a pause and then everything goes on the same.  I think that, independently of a pause being a possibly positive idea, we need to make sure that we move forward, and move forward as quickly as possible, on all the other mechanisms that we have been discussing.

For information media. Not an official record.