
Jędrzej Niklas

November 16th, 2023

The politics of environmental AI




Jędrzej Niklas, a Marie Skłodowska-Curie fellow at the Polish Academy of Sciences and a Visiting Fellow at LSE, writes here about some of the findings from his DATAFORESTS project regarding how AI is used in environmental conservation.

Artificial intelligence (AI) systems, drones, satellites and other data technologies are reshaping environmental conservation, agriculture and the management of natural resources. Advanced computer systems provide new methods for analyzing climate change and biodiversity loss, and shape important decisions about wildlife and society. However, such technologies also bring significant challenges and trade-offs. From the opacity of algorithmic decision-making and the complexity of data governance to the growing role of tech companies, these issues now pose important questions for environmental politics.

Fast, direct, and precise

Partnering with NASA in July of this year, the Brazilian government gained access to high-quality satellite imagery of the Amazon rainforest. As part of this collaboration, advanced satellites will be deployed, capable of capturing exceptionally precise images showing wildfires, soil erosion or species composition. These new satellites will join the larger infrastructure that Brazil has been using to detect land clearances and combat deforestation, a pressing issue in the region. Satellites are particularly important for studying rainforests, as they allow observation and data collection in very remote places and at a large scale. Technological progress makes data gathering cheaper and faster, and the data is often delivered to end users in real time. This also marks a break with traditional observations of nature, which were made from the ground, as more data is now gathered automatically from space.

Such vast amounts of information enhance decision-making on environmental challenges and contribute to the development of advanced computational systems for sophisticated analytics. In the Brazilian case, AI models combined satellite data with other information sources (such as historical or criminal data) not only to locate deforestation but also to predict when and where it might occur. In other contexts, AI systems are employed to track water loss, optimize the use of fertilizers in farming, locate endangered species or protect natural parks from poachers. This potential has been widely recognized at the policy level, with both the European Union and the UN acknowledging digital technologies as instrumental in achieving climate goals and facilitating evidence-based environmental decisions. However, as useful as these technologies may be, they also come with challenges and concerns.

Complex systems and questions about environmental biases

Predictions and analyses made by AI systems usually rely on very complex data operations that often make them opaque and hard to understand, even for specialists. This is a widely known problem in the debate about AI, where transparency and “explainability” are acknowledged as key challenges. The concern extends to the environmental domain, where such technologies can underpin decisions on vital matters like water resource management, poaching prevention, or reforestation plans. These are not trivial issues, as technologies can significantly affect human-environment relations.

Understanding the rationale, character and design of particular AI models can be particularly important. Numerous reports explain that AI systems can perpetuate bias due to skewed data, potentially resulting in discrimination against specific communities. Similar issues may arise in environmental contexts. While data collection has become more accessible, it still depends on access to electricity, the Internet, financial resources and institutional factors. Some areas lack such provisions, resulting in an inherent imbalance in which certain regions, ecosystems, habitats, or species are inadequately represented. This can have grave consequences. For example, erroneous observations may trigger the “Romeo error”: prematurely declaring a species extinct, thus halting conservation efforts and failing to protect that species’ shrinking habitats. What information is recorded matters greatly, as it can produce AI systems that do not adequately depict reality. For example, certain cases demonstrate that AI used in agriculture can be trained on data about crops, soil chemistry and farming practices in Europe or the US. This presents a significant challenge if such tools are applied in places with different local contexts and agricultural traditions.

Additionally, technologies can embody values and political choices. For instance, tools designed to counter illegal wildlife trafficking frequently prioritize surveillance of poachers rather than monitoring the entire supply chain of poached products, reflecting a militarized conservation approach with potential negative impacts on local communities.

New actors and environmental impacts

But the proliferation of AI is not only about what is collected and when; it is also about who designs and sells these technologies. The environmental tech sector contains a broad variety of actors: universities, start-ups and non-profits, as well as major tech corporations. Data and computing experts bring new logics and understandings of nature. Additionally, some of these actors have their own particular economic interests. One potential challenge is that the safeguarding of trade secrets and other business regulations adds a further layer of complexity to granting access to systems and subjecting them to scrutiny. Moreover, prominent tech giants like Google and Microsoft have established dedicated programs and grant schemes that emphasize sustainability and conservation efforts. While such collaborations can yield benefits, these companies also espouse their own values and visions for the future. Consequently, environmental issues or perspectives that align with their business interests may receive disproportionate attention and resources, potentially diverting focus from critical but less commercially viable areas.

The dependency on AI and other data systems also raises questions about their sustainability. Data centers, essential for storing and processing vast amounts of information, demand significant energy, contributing to greenhouse gas emissions. In countries like Ireland, the substantial energy requirements of these centers have become a pressing concern, accounting for almost one fifth of all electricity used. Research also reveals that training large AI models is an extremely energy- and water-consuming activity. Additionally, the production of hardware and equipment, including batteries that contain minerals like lithium, further adds to environmental strain due to the adverse impacts of resource extraction. There are also smaller-scale consequences of using data technologies: as multiple researchers have noted, the sensors, cameras and drones used for conservation purposes may stress animals, affecting their behavior and reproduction.

Old regulations, new regulations and ethical AI

These various problems around data quality, political economy and the sustainability of AI systems naturally raise questions about governance and the specific regulatory challenges posed in the environmental sector. There is, of course, no shortage of laws and institutions regarding environmental issues, some of which directly affect how AI systems are used. For example, battles over access to environmental information and the “right to know” have been an important element of the ecological movement for decades. Scandals and disasters like those in Bhopal and Chernobyl prompted calls for greater awareness and disclosure of information about toxic substances, pollution, and carbon emissions. Numerous court battles and protests eventually led to a series of laws and regulations in the 1990s focused on access to environmental information and the democratization of environmental governance, such as the Aarhus Convention. AI systems developed in this area will have to meet those requirements; how this will happen in practice, however, is still an open question.

There is also a growing discussion about ethical AI and the regulation of AI systems. There are numerous examples of principles and guidelines for developing “human-centric” or “ethical” AI. However, environmental issues have received rather limited attention. One exception is the “Biosphere Code Manifesto,” which provides principles for developing technologies with the goal of protecting and strengthening ecosystems. The EU’s new law on AI also touches on environmental questions by calling for AI systems to be developed and used in a sustainable and environmentally friendly manner. These are, however, still narrowly targeted and limited interventions.

The consequences of using AI in an environmental context are significant and lead to much broader questions about the role of technologies in the management of nature and ecosystems. It is crucial to understand who develops and sells such technologies, the circumstances in which they are designed, and their intended purposes. As these technologies become more prominent, their influence on environmental policies will also grow. We probably also need a more profound understanding of how technology, society, and nature interact. Some argue for new paradigms in technology development and thinking, incorporating a more pluralistic view that considers the diverse ways in which humans connect with nature. Others criticize the solutionist logic that perceives technological systems as ultimate solutions to various “big” problems, including those related to the environment. The practical minimum, however, could involve better contextualizing technology within a holistic relationship with ecosystems. This approach would help us comprehend how data-driven interventions can protect nature, address the climate crisis, and promote a more sustainable future.

This article was published as part of the DATAFORESTS project, “Public Forest Governance in a Datafied Society”, hosted by the Polish Academy of Sciences. The project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 847639 and from the Ministry of Education and Science.

This post represents the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

About the author

Jędrzej Niklas

Dr. Jędrzej (Jedrek) Niklas is a socio-legal scholar whose research focuses on the complex relationship between technology, governance, and social justice. He has written extensively about the proliferation of data technologies in the public sector, the evolution of digital rights, and issues surrounding automated discrimination. Jędrzej is currently a Marie Skłodowska-Curie fellow at the Polish Academy of Sciences. In his newest project, “DATAFORESTS: Public Forest Governance in a Datafied Society”, he investigates the challenges and opportunities arising from the integration of data technologies into the governance of the planet's natural resources, particularly forests. Before his current position, Jędrzej worked as a Postdoctoral Researcher at the London School of Economics and was a member of the Data Justice Lab team at Cardiff University. Jędrzej holds a Ph.D. in law, specializing in international public law, from the University of Warsaw. Prior to his academic career, he spent six years as a legal and policy specialist at the Panoptykon Foundation, a Polish civil society organization. In this role, he actively shaped policies related to data protection (notably the GDPR), automated decision-making, and the surveillance of vulnerable groups.

