The Reference System approach is often seen as the poor cousin of Risk Analysis, but once you understand it, you will appreciate that it is actually a very common approach. It is also, however, the easiest one to get wrong if used inappropriately.
Welcome to Edition #4 of Sunburnt Country. As we hit the midway point of the UN’s ‘Decade of Action’ for the Sustainable Development Goals, this newsletter curates key updates—from shifting global priorities to regulatory changes and corporate innovations.
This week’s article had a different focus, but I wanted to change tack based on some news from last month: the launch of the Coalition for Environmentally Sustainable Artificial Intelligence. This groundbreaking initiative, which kicked off less than a month ago on February 11, 2025, in Paris, marks a pivotal moment at the intersection of technology and environmental stewardship.
The place to start is with your Industry and/or State Regulator. Many Regulators provide details of Standards or other similar documentation, such as Codes of Practice. It is straightforward to argue that these have already been accepted in the same regulatory environment, and that they therefore represent Relevant Good Practice and should be complied with so far as is Reasonably Practicable.
This edition of Sunburnt Country explores the latest trends in sustainability, focusing on AI's environmental impact, ESG reporting advancements, and climate change policy challenges.
In our previous discussion, we offered an overview of the transformative potential of AI in advancing sustainability. However, the deployment of AI technologies is not without its ethical challenges. It promises the best of times and the worst of times all at once. Nate Silver recently stated that “AI is the highest-stakes game of poker in the world right now. Even in a bearish case, where we merely achieve modest improvements over current LLMs and other technologies like driverless cars, far short of artificial superintelligence (ASI), it will be at least an important technology. Probably at least a high 7 or low 8 on what I call the Technological Richter Scale, with broadly disruptive effects on the distribution of wealth, power, agency, and how society organizes itself. And that’s before getting into p(doom), the possibility that civilization will destroy itself or enter a dystopia because of misaligned AI.” If you have a moment, indulge yourself and go down the Technological Richter Scale and p(doom) rabbit holes; but for now, in short: AI's impact will be significant and broad.