
Paula Gori

April 21st, 2023

The impact of disinformation on containing climate change. A climate crisis?


Paula Gori, Secretary General of the European Digital Media Observatory (EDMO), explains the risk posed by disinformation to the fight against climate change, and how it might be addressed by EU policymakers.

The phenomenon originally known as climate change is now widely being called a climate emergency, to highlight the need for rapid action. In parallel, the spread of disinformation around this issue is undermining the collective effort to save the planet. As UN Chief Communicator Melissa Fleming put it, “climate action is being undermined by bad actors seeking to deflect, distract, and deny efforts to save the planet. Disinformation, spread via social media, is their weapon of choice.” According to the latest IPCC report (page 1931), disinformation resulting in public misperception of climate risks is delaying urgent adaptation planning and implementation. It is no coincidence that the European Parliament called for the creation of a global code of conduct on disinformation to address climate mis- and disinformation, based on the model of the IPCC, to provide the basis for a Paris Agreement on disinformation.

Polluting the online environment, damaging our planet

Disinformation related to climate change comes in various forms. An EDMO investigation identified a number of narratives spread in summer 2022 in the EU including:

  • climate change is not real and/or is not related to human activities;
  • traditional media spread panic through false news and/or manipulated images;
  • renewables, recycling, and electric vehicles are useless or dangerous;
  • the climate movement is hypocritical and/or foolish.

For example, one story detected in the Netherlands claimed that carbon dioxide could not be responsible for climate change, given its low concentration in the atmosphere and the fact that the planet “naturally” releases CO2. Another example is a false story about temperatures in Sweden, which spread in a number of EU Member States. Its creators compared two maps in which different media outlets had chosen different colours to show temperature, claiming that there was no increase in average temperatures and that the media were exaggerating to create panic. However, the maps were attributed to the wrong years, and the different colour scales used by the outlets made such a comparison impossible. A final example, illustrating yet another way of producing disinformation, is a picture of an area strewn with garbage that circulated as showing the aftermath of Greta Thunberg’s speech at the 2022 Glastonbury Festival. The picture is not fake, but it is wrongly attributed: it depicts the aftermath of the same festival in 2015.

The EU policy against disinformation, from self-regulation to co-regulation

Tackling disinformation requires a multistakeholder, multi-disciplinary approach; there is no single silver bullet. Publishing fact-checks, improving media literacy and conducting research all play key roles in building societal resilience. Recent evidence suggests that inoculation against disinformation yields promising results. All this is complemented by – and in some cases coordinated through – a specific policy approach. It is important to remember that disinformation as such is not illegal at EU level, and any response needs to respect fundamental rights.

Recent EU policy developments targeting online disinformation reflect a self-regulatory approach that is evolving into co-regulation and is watched with interest globally. In June 2022, online platforms, fact-checkers, advertisers and civil society organisations signed the Strengthened Code of Practice on Disinformation. The Code is a self-regulatory tool, owned by its signatories, with commitments in eight macro areas, including demonetisation and the empowerment of users, researchers and the fact-checking community.

Meanwhile, the Digital Services Act (DSA) entered into force in November 2022 and will be applicable across the EU Member States in early 2024. Under the DSA, very large online platforms (VLOPs) and very large online search engines have to run assessments of the systemic risks arising from their services, or from the use made of their services, and adopt risk mitigation measures accordingly. In other words, they need to assess whether their services, or the way they are structured, could be misused by users to cause harm or to behave illegally. Under the DSA, adherence to and compliance with recognised codes of conduct by VLOPs may be considered a risk mitigation measure. The aforementioned Code of Practice, as stated in its preamble, aims to become a Code of Conduct under the DSA.

In addition, online platforms have their own content moderation policies, including specific provisions on climate change. Examples include TikTok, which does not allow climate change disinformation that undermines well-established scientific consensus, and Meta, which labels verified climate change disinformation and reduces its visibility.

What happens in case of a crisis under the DSA?

The DSA, in Article 36, includes specific provisions for crisis situations. Under this crisis response mechanism, the need for urgent, specific measures by VLOPs may arise. Upon recommendation of the European Board for Digital Services, the European Commission (EC) can require VLOPs to take one or more specific actions, such as: i) assessing whether, and how, the functioning and use of their service significantly contributes to a serious threat; ii) identifying and applying specific and proportionate measures to mitigate that risk; and iii) reporting regularly on the measures taken.

The DSA also foresees the possibility for the EC to initiate the drawing up of voluntary crisis protocols for rapid cross-border coordination, for instance when VLOPs are misused to spread disinformation at a moment when reliable information needs to be disseminated rapidly. These protocols must respect the provisions of the DSA and do not entail a general obligation to monitor information. Under the DSA, such actions are limited to a period of three months, which can be extended for a further three months.

A crisis, under Article 36, is deemed to have occurred where extraordinary circumstances lead to a serious threat to public security or public health in the Union or in significant parts of it.

Could a crisis be declared relating to the climate emergency?

The effects of climate change can cause extraordinary circumstances that lead to serious threats to public health and/or public security. The DSA will be applicable from early 2024, and the Commission is working on the delegated acts, which cannot change the essential elements of the law but are binding; in other words, further details may emerge. It will be interesting to see how crises linked to climate change, such as natural disasters, will be approached under the DSA. Would this link need to be assessed, and if so, by whom? As an extreme scenario, if the climate emergency turns into a (global) crisis, how will it be dealt with? Would the DSA measures be fit for such a situation? And given that this would by definition be a global crisis, could EU policy develop into a global standard?

As with all new legislative instruments, questions on specific implementation cases arise and answers are provided naturally in due course. The climate emergency is another example of a key challenge posed by disinformation: how to address harmful content while respecting freedom of expression. It is also – like the Covid-19 pandemic – an example of how the focus and impact of disinformation goes well beyond elections.

Finally, it may be worth exploring whether disinformation on the climate change/emergency could also be tackled from other policy angles. For example, what if online platforms included the impact of disinformation within their Scope 3 indirect GHG emissions reporting? Would that be an incentive to mitigate the risks arising from the design or functioning of their services?

This post represents the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

Featured image: Photo by Li-An Lim on Unsplash

About the author

Paula Gori

Paula Gori is the Secretary-General and Coordinator of EDMO. She joined the School of Transnational Governance at the European University Institute in 2017, where she is a member of the management team. Before that, she was the Coordinator of the Florence School of Regulation – Communications and Media, which offers training, policy and research activities on electronic communications regulation and competition, and she collaborated with the Centre for Media Pluralism and Media Freedom, which she coordinated during its initial set-up phase in 2012. For several years she was the Scientific Coordinator of the Annual Conference on Postal and Delivery Economics, and she is one of the authors of a report for the European Commission on European Union competences in respect of media pluralism and media freedom. Paula has a legal background and is a qualified civil mediator.

Posted In: EU Media Policy
