Inclusive Information Ecosystem | Information Integrity on Digital Platforms

While digital platforms and new technologies such as artificial intelligence (AI) have transformed our social, cultural, and political interactions by connecting, informing, and engaging people, they have also enabled the spread of lies and hate through mis- and disinformation, which can lead to violence and death and jeopardize democratic institutions and human rights. Some platforms have even faced criticism for their role in conflicts, including the war in Ukraine.

In Our Common Agenda (OCA), which sets out a vision for the future of international cooperation, the Secretary-General called for “empirically backed consensus around facts, science and knowledge.” A policy brief titled Information Integrity on Digital Platforms responds to that call by outlining potential principles for a code of conduct “to make the digital space more inclusive and safe for all, while vigorously defending the right to freedom of opinion and expression, and the right to access information.” It is the Secretary-General’s hope that the Code of Conduct for Information Integrity on Digital Platforms, being developed in preparation for the Summit of the Future in 2024, will “provide a gold standard for guiding action to strengthen information integrity.”

The publication is the eighth of 11 OCA policy briefs that offer “concrete ideas” to advance work on “multilateral solutions for a better tomorrow.” Other briefs cover:

1) the needs of future generations

2) improving the international response to complex global shocks through an emergency platform

3) more systematic participation by young people in decision-making processes

4) metrics that go beyond gross domestic product (GDP)

5) global digital cooperation on maximizing and sharing the benefits of digital technology through a global digital compact

6) reform of the global financial architecture

7) the peaceful, secure, and sustainable use of outer space

8) a New Agenda for Peace

9) reimagining and accelerating progress on education

10) strengthening the capacities of the UN for the 21st century by building a ‘UN 2.0.’

What is information integrity?

The policy brief defines information integrity as the accuracy, consistency and reliability of information. Disinformation, misinformation, and hate speech, it argues, compromise information integrity by “polluting” the information ecosystem and threatening human progress. The brief distinguishes misinformation, which it defines as the unintentional spread of inaccurate information, from disinformation, which it treats as false information disseminated intentionally to cause serious social harm. It defines hate speech as any kind of communication in speech, writing or behaviour that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor.

What is at stake?

On digital platforms, mis- and disinformation can be created and spread by State and non-state actors alike, spanning multiple contexts, including armed conflict, with implications for all areas of development, including peace and security, human rights, public health, humanitarian aid, and climate action. Many countries, the brief notes, have attempted to regulate digital platforms, with at least 70 such laws adopted or considered in the last four years. Some of these regulatory initiatives have silenced protected speech, infringed human rights, or served as a pretext to restrict access to information, discredit reporting, and target opponents, among other outcomes.

The policy brief flags several serious concerns that online mis- and disinformation and hate speech pose for the global public, with young people being particularly vulnerable. Information pollution carries significant implications for trust, safety, democracy, and sustainable development, it notes. The brief warns that in times of crisis, emergency, and conflict, the effects of mis- and disinformation can be especially devastating, citing the COVID-19 pandemic, the war in Ukraine, and the climate crisis as contexts in which they have had profound negative impacts.

What can be done?

According to the policy brief, the Secretary-General will put forward a UN Code of Conduct for Information Integrity on Digital Platforms, underpinned by the following principles, which build on the core ideas elaborated in the brief:

  • Commitment to information integrity: All stakeholders should refrain from using, supporting, or amplifying disinformation and hate speech for any purpose.

  • Respect for human rights: Member States should ensure that responses to mis- and disinformation and hate speech are consistent with international law and that the fundamental rights of users of digital platforms are protected.

  • Support for independent media: Member States should guarantee a free, viable, independent, and plural media landscape. News media should ensure accurate and ethical independent reporting.

  • Increased transparency: Digital platforms should ensure transparency regarding algorithms, data, content moderation, and advertising, and publicize policies on mis- and disinformation and hate speech. News media should ensure meaningful transparency of funding sources and advertising policies.

  • User empowerment: Member States should ensure public access to accurate, transparent, and credibly sourced government information. Digital platforms should ensure transparent user empowerment and protection, and all stakeholders should invest in robust digital literacy.

  • Strengthened research and data access: Member States should invest in and support independent research on the prevalence and impact of mis- and disinformation and hate speech. Digital platforms should allow researchers and academics access to data while respecting user privacy.

  • Scaled-up responses: All stakeholders should: allocate resources to address and report on the origins, spread, and impact of mis- and disinformation and hate speech; form broad coalitions on information integrity; and promote training and capacity building.

  • Stronger disincentives: Digital platforms should move away from business models that prioritize engagement above human rights, privacy, and safety.

  • Enhanced trust and safety: Digital platforms should ensure safety and privacy by design in all products and invest in both human and AI-based content moderation systems.

Way forward

According to the policy brief, the UN Secretariat will conduct stakeholder consultations on the development of the UN Code of Conduct, including mechanisms for follow-up and implementation. These, it states, could include the establishment of an independent observatory made up of recognized experts to assess the measures taken by the actors who commit to the Code of Conduct. The Secretariat may also undertake in-depth studies to enhance understanding of information integrity globally, especially in under-researched parts of the world.

The Secretary-General will also establish dedicated capacity in the Secretariat to scale up the response to online mis- and disinformation and hate speech affecting United Nations mandate delivery and substantive priorities. This will include tailored communication strategies to anticipate and address threats before they translate into harm.