Sustainability and AI #2: AI for Good, or AI Gone Wrong? Addressing the Ethical Challenges

In our previous discussion, we explored an overview of the transformative potential of AI in advancing sustainability. However, the deployment of AI technologies is not without its ethical challenges. It promises the best of times and the worst of times all at once. Nate Silver recently stated that “AI is the highest-stakes game of poker in the world right now. Even in a bearish case, where we merely achieve modest improvements over current LLMs and other technologies like driverless cars, far short of artificial superintelligence (ASI), it will be at least an important technology. Probably at least a high 7 or low 8 on what I call the Technological Richter Scale, with broadly disruptive effects on the distribution of wealth, power, agency, and how society organizes itself. And that’s before getting into p(doom), the possibility that civilization will destroy itself or enter a dystopia because of misaligned AI.” If you have a moment, indulge yourself and go down the Technological Richter Scale and p(doom) rabbit holes, but for now, in short, its impact will be significant and broad.


Therefore, this is one of those moments where with great power comes great responsibility. In the rush to explore and engage with exciting advances in AI, we must ensure that it also aligns with fundamental human rights and has safeguards that minimize unintended consequences. Indeed, as sustainability professionals, the ethical alignment of AI and LLMs may increasingly come under our remit. Is it possible that in the next decade we will be expected to articulate and demonstrate the ethical sustainability of the LLMs used within our organization and supply chains?

For now, though, this remains a consideration for the future. Several current challenges are discussed below; while the fears surrounding them may prove greater or smaller than their eventual impact, these challenges provide the nascent structure for robust, ethical, and human-centered AI models in the future.

Algorithmic Bias: Algorithmic bias has long been a concern in conventional software and has now been replicated in LLMs. It occurs when AI systems produce results that unfairly favour certain groups over others. In the context of sustainability, biased AI could lead to inequitable resource allocation or skewed environmental policies. For instance, an AI system designed to optimize water usage might inadvertently prioritize certain crops over others, resulting in unequal water distribution. Mitigating bias requires diverse training datasets and regular audits.
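To make the idea of a regular audit concrete, one simple check is the disparate impact ratio: compare selection (or allocation) rates across groups and flag large gaps. The sketch below is purely illustrative; the group labels and data are hypothetical, and the 0.8 threshold is the commonly cited "four-fifths rule" of thumb, not a legal standard.

```python
def selection_rates(decisions):
    # decisions: list of (group, selected) pairs, e.g. ("A", True)
    totals, selected = {}, {}
    for group, chosen in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    # Ratio of the lowest to the highest group selection rate;
    # values below 0.8 are commonly treated as a red flag for bias.
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: group A selected 8/10, group B selected 4/10.
decisions = [("A", True)] * 8 + [("A", False)] * 2 \
          + [("B", True)] * 4 + [("B", False)] * 6
ratio = disparate_impact(decisions)  # 0.4 / 0.8 = 0.5, well below 0.8
```

A real audit would also examine the training data itself and repeat the check over time, since bias can creep in as data drifts.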

Lack of Transparency: Transparency in AI refers to the clarity and understandability of how AI systems make decisions. The term "black box" is often used to describe AI systems whose decision-making processes are opaque. This lack of transparency can lead to mistrust and misuse, particularly in sustainability projects. For example, an AI system managing a smart grid without clear insight into its decision-making process can create significant challenges: while it may make a senior engineer's role easier, younger engineers potentially miss out on the vital experience gained through troubleshooting. The solution lies in developing explainable AI and ensuring transparent decision-making processes that enable human expertise to step in when required.
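One of the simplest building blocks of explainable AI is sensitivity analysis: perturb each input in turn and measure how the output shifts, giving a crude local importance score per feature. The sketch below uses an invented toy "demand model" with hypothetical feature names, purely to show the shape of the technique.

```python
def explain(model, inputs, delta=1.0):
    # Perturb each input by `delta` and record the change in model
    # output, yielding a local importance score per feature.
    baseline = model(inputs)
    scores = {}
    for name in inputs:
        perturbed = dict(inputs, **{name: inputs[name] + delta})
        scores[name] = model(perturbed) - baseline
    return scores

# Toy stand-in for a smart-grid load predictor (hypothetical features).
def demand_model(x):
    return 2.0 * x["temperature"] + 0.5 * x["hour"]

scores = explain(demand_model, {"temperature": 20.0, "hour": 12.0})
# → {"temperature": 2.0, "hour": 0.5}: temperature dominates this prediction
```

Production-grade explainability methods (such as SHAP or LIME) are far more sophisticated, but they rest on the same intuition: attribute the output to the inputs so a human can sanity-check the decision.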

Social Justice Concerns: AI has the potential to perpetuate social inequalities. In August 2023, the tutoring company iTutorGroup settled a lawsuit over AI-powered recruiting software that automatically rejected female applicants aged 55 and older and male applicants aged 60 and older. In sustainability, AI-driven policies might overlook the needs of certain communities, such as low-income neighbourhoods. Ensuring inclusive AI development and equitable policies is crucial to prevent exacerbating social injustices. For example, AI systems used in urban planning must consider the needs of all residents to avoid creating disparities in access to green spaces or clean energy.


Data Privacy: Data privacy is a critical concern in AI applications. The unauthorized use of personal data to train AI models can lead to significant privacy breaches. In sustainability projects, such breaches can have serious consequences. For example, an AI system monitoring environmental data might inadvertently expose sensitive information about landowners. Implementing robust data governance frameworks and adhering to strict privacy regulations are essential to protect personal information.
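One practical safeguard in the landowner example is pseudonymization: replace directly identifying fields with a keyed hash before the data leaves the collection system, so analysis can proceed without exposing identities. The sketch below is a minimal illustration with invented field names; note that keyed hashing is pseudonymization, not full anonymization, since combinations of the remaining fields can still re-identify individuals.

```python
import hashlib

def pseudonymize(record, secret, fields=("owner_name", "parcel_id")):
    # Replace identifying fields with a truncated keyed SHA-256 digest so
    # environmental readings can be analysed without exposing identities.
    out = dict(record)
    for field in fields:
        if field in out:
            digest = hashlib.sha256((secret + str(out[field])).encode()).hexdigest()
            out[field] = digest[:16]
    return out

record = {"owner_name": "Jane Doe", "parcel_id": "A-42", "soil_moisture": 0.31}
safe = pseudonymize(record, secret="rotate-me-regularly")
# safe keeps the measurement but no longer contains "Jane Doe" or "A-42"
```

The secret key must be stored separately from the data and rotated under a governance policy; with the key, records can be linked over time, and without it, the hashes are not reversible in practice.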

Unintended Consequences: AI systems can sometimes produce unforeseen negative impacts. There are several examples of corporate chatbots going “off piste” and undermining organizational values. In sustainability, where many considerations intersect, any model needs to be deployed and operated with this risk in mind. For instance, an AI system designed to reduce energy consumption might inadvertently increase it by mismanaging resources. Proactive risk management and continuous monitoring are key to identifying and addressing these issues early.
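Continuous monitoring for the energy example can be as simple as a guardrail that compares AI-driven consumption against the pre-AI baseline and alerts operators when the "optimization" is making things worse. The sketch below is a hypothetical illustration; the tolerance and readings are invented.

```python
def consumption_guardrail(baseline_kwh, optimized_kwh, tolerance=0.05):
    # True when the AI-driven schedule is acceptable: consumption must
    # not exceed the pre-AI baseline by more than the tolerance.
    # A failure signals that operators should revert to manual control.
    return optimized_kwh <= baseline_kwh * (1 + tolerance)

# Hypothetical daily readings (kWh): the third day regresses past baseline.
baseline = 1200.0
readings = [1150.0, 1180.0, 1310.0]
alerts = [not consumption_guardrail(baseline, kwh) for kwh in readings]
# → [False, False, True]: day three triggers a review
```

The point is not the arithmetic but the pattern: every AI-driven optimization should ship with an independent metric, a threshold, and a human escalation path.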

The above illustrate some of the key risks we face in implementing AI. While the list is not exhaustive (there are always unknown unknowns), it captures the principal ethical challenges associated with applying AI to sustainability. As professionals in this field, we must engage in ethical AI practices and advocate for responsible AI development. By doing so, we can harness the power of AI to drive positive change while mitigating potential risks.

But let's take a step back and think about the bigger picture. AI is a powerful tool that, when used responsibly, can help us tackle some of the most pressing environmental challenges of our time. From optimizing resource use in agriculture to enhancing disaster preparedness and monitoring biodiversity, the potential benefits are immense. However, we must remain vigilant about the ethical implications and ensure that our AI systems are fair, transparent, and inclusive.

As sustainability professionals, we have a unique opportunity to shape the future of AI in a way that aligns with our values and goals. By fostering interdisciplinary collaboration, promoting ethical standards, and continuously learning from our experience (skills many of us already use in our day-to-day roles!), we will be well placed to leverage the potential of AI.