Much ado about phosphorus

‘Life can multiply until all the phosphorus has gone and then there is an inexorable halt which nothing can prevent’ – Isaac Asimov, 1974

WHO: K. Ashley, D. Mavinic, Department of Civil Engineering, Faculty of Applied Science, University of British Columbia, Vancouver, BC, Canada
D. Cordell, Institute for Sustainable Futures, University of Technology, Sydney, Australia

WHAT: A brief history of phosphorus use by humans and ideas on how we can prevent the global food security risk of ‘Peak Phosphorus’

WHEN: 8 April 2011

WHERE: Chemosphere Vol. 84 (2011) 737–746

TITLE: A brief history of phosphorus: From the philosopher’s stone to nutrient recovery and reuse (subs req.)

Phosphorus sits towards the right-hand side of your periodic table, one row down and directly underneath nitrogen. It’s one of those funny elements that we all need to live and survive and grow things, but in its elemental form it’s also highly reactive, flammable and toxic.

It’s in our DNA – the A, G, C and T bases pair up to form the rungs of the double helix, while the sides of the ladder are held together by phosphodiester bonds. Phosphorus is literally helping to hold us together.

Phosphodiester bonds in DNA (from Introduction to DNA structure, Richard B. Hallick, U Arizona)

Phosphorus can be extracted pretty easily from human urine, which is what German alchemist Hennig Brand did in the 1660s in an attempt to create the Philosopher’s Stone, which was supposed to be able to turn base metals into gold. No seriously, apparently he was committed enough to the idea to distill 50 buckets of his own pee to do this!

What do alchemy, DNA, and human pee have to do with a scientific paper? Well these researchers were looking at how we’ve previously used phosphorus, why it is that we’re now running out of it and what we can learn from history to try and avoid a global food security risk.

Phosphorus comes in three main forms – white, black and red. The phosphorus mined for fertilizer today comes from apatite-bearing phosphate rock (graded by its P2O5 content), which generally took 10–15 million years to form. In traditional short-term human thinking, though, the fact that the rocks take that long to form didn’t stop people from mining them and treating phosphorus as an ‘endless’ resource (just like oil, coal, forests, oceans etc.).

The paper states that phosphorus was originally used for ‘highly questionable medicinal purposes’ and then doesn’t detail what kinds of wacky things it was used for (boo!). Given the properties of white phosphorus – highly reactive and flammable when exposed to air, prone to spontaneous combustion and poisonous to humans – the mind boggles as to what ‘medicinal’ uses it had.

The major use of phosphorus is as an agricultural fertilizer, which pre-industrialisation was achieved through the recycling of human waste and sewage. However, with 2.5 million people living in Victorian-era London, the problem of excess human waste became unmanageable and led to all kinds of nasty things like cholera and the ‘Great Stink’ of the Thames in 1858, which was so bad that it shut down Parliament.

This led to what was called the ‘Sanitary Revolution’ aka the invention of flush toilets and plumbing on a large scale. This fundamentally changed the phosphorus cycle – from a closed loop of localised use and reuse to a more linear system as the waste was taken further away.

After the Second World War, the use of mined mineral phosphorus really took off – phosphorus fertilizer use rose sixfold between 1950 and 2000 – and modern agriculture is now dependent on phosphorus-based fertilizers. This has led to major phosphorus leakage into waterways and oceans through agricultural runoff, creating eutrophication and ocean dead zones.

Eutrophication in the Sea of Azov, south of Ukraine (SeaWiFS Project, NASA/Goddard Space Flight Center, and ORBIMAGE)

The problem here is that we’ve switched from a closed-loop system, where the waste from the farmhouse goes onto the farmyard and all the phosphorus is recycled, to a linear system where phosphorus gets mined, used as fertilizer, and then largely runs off into the ocean. It’s not even a very efficient system – only a fifth of the phosphorus mined for food production actually ends up in the food we eat.

The problem we’re now facing is the long-term ramification of this new system: phosphorus has become a scarce global resource, and we’ve been forced to start mining rocks with lower-quality phosphorus, higher rates of contaminants and more difficult access. We’re down to the tar sands equivalent of minable phosphorus, most of which is found in only five countries: Morocco, China, the USA, Jordan and South Africa. Maybe they can form the next OPEC-style cartel, for phosphorus?

Peak phosphorus is likely to happen somewhere between 2030 and 2040, which is where the scary link to climate change comes in. The researchers cheerfully call phosphorus shortages the ‘companion show-stopper to climate change’, by which they mean that soils will start to run out of the nutrients they need at about the same time that extended droughts from climate change will be diminishing crop yields and we’ll have about 9 billion people to scramble to feed.

Basically, a phosphorus shortage is something that we can easily avoid through better and more efficient nutrient recycling, but it’s something that will kick us in the ass once we’re already struggling to deal with the consequences of climate change. The paper states that we need to start re-thinking our ‘western style’ of sewage treatment to better recover water, heat, energy, carbon, nitrogen and phosphorus from our waste systems. This doesn’t mean (thankfully) having to return to a middle ages style of living – it means having cities that are innovative enough about their municipal systems (I was surprised to find out that sewage treatment is one of the most expensive and energy intensive parts of public infrastructure).

The False Creek Neighbourhood Energy Utility in Vancouver

In Vancouver, we’re already starting to do that with the waste cogeneration system at Science World and the False Creek Neighbourhood Energy Utility that produces energy from sewer heat.

It’s pretty logical; we need to re-close the loop on phosphorus use and we need to do it sensibly before our failure to stop burning carbon means ‘Peak Phosphorus’ becomes the straw that breaks the camel’s proverbial back.

Pandora’s Permafrost Freezer

What we know about permafrost melt is less than what we don’t know about it. So how do we determine the permafrost contribution to climate change?

WHO: E. A. G. Schuur, S. M. Natali, C. Schädel, University of Florida, Gainesville, FL, USA
B. W. Abbott, F. S. Chapin III, G. Grosse, J. B. Jones, C. L. Ping, V. E. Romanovsky, K. M. Walter Anthony University of Alaska Fairbanks, Fairbanks, AK, USA
W. B. Bowden, University of Vermont, Burlington, VT, USA
V. Brovkin, T. Kleinen, Max Planck Institute for Meteorology, Hamburg, Germany
P. Camill, Bowdoin College, Brunswick, ME, USA
J. G. Canadell, Global Carbon Project, CSIRO Marine and Atmospheric Research, Canberra, Australia
J. P. Chanton, Florida State University, Tallahassee, FL, USA
T. R. Christensen, Lund University, Lund, Sweden
P. Ciais, LSCE, CEA-CNRS-UVSQ, Gif-sur-Yvette, France
B. T. Crosby, Idaho State University, Pocatello, ID, USA
C. I. Czimczik, University of California, Irvine, CA, USA
J. Harden, US Geological Survey, Menlo Park, CA, USA
D. J. Hayes, M. P. Waldrop, Oak Ridge National Laboratory, Oak Ridge, TN, USA
G. Hugelius, P. Kuhry, A. B. K. Sannel, Stockholm University, Stockholm, Sweden
J. D. Jastrow, Argonne National Laboratory, Argonne, IL, USA
C. D. Koven, W. J. Riley, Z. M. Subin, Lawrence Berkeley National Lab, Berkeley, CA, USA
G. Krinner, CNRS/UJF-Grenoble 1, LGGE, Grenoble, France
D. M. Lawrence, National Center for Atmospheric Research, Boulder, CO, USA
A. D. McGuire, U.S. Geological Survey, Alaska Cooperative Fish and Wildlife Research Unit, University of Alaska, Fairbanks, AK, USA
J. A. O’Donnell, Arctic Network, National Park Service, Fairbanks, AK, USA
A. Rinke, Alfred Wegener Institute, Potsdam, Germany
K. Schaefer, National Snow and Ice Data Center, Cooperative Institute for Research in Environmental Sciences, University of Colorado, Boulder, CO, USA
J. Sky, University of Oxford, Oxford, UK
C. Tarnocai, AgriFoods, Ottawa, ON, Canada
M. R. Turetsky, University of Guelph, Guelph, ON, Canada
K. P. Wickland, U.S. Geological Survey, Boulder, CO, USA
C. J. Wilson, Los Alamos National Laboratory, Los Alamos, NM, USA
S. A. Zimov, North-East Scientific Station, Cherskii, Siberia

WHAT: Interviewing and averaging the best estimates by world experts on how much permafrost in the Arctic is likely to melt and how much that will contribute to climate change.

WHEN: 26 March 2013

WHERE: Climatic Change, Vol. 117, Issue 1-2, March 2013

TITLE: Expert assessment of vulnerability of permafrost carbon to climate change (open access!)

We are all told that you should never judge a book by its cover, however I’ll freely admit that I chose to read this paper because the headline in Nature Climate Change was ‘Pandora’s Freezer’ and I just love a clever play on words.

So what’s the deal with permafrost and climate change? Permafrost is the permanently frozen dirt/mud/sludge in the Arctic that often looks like cliffs of chocolate mousse when it’s melting. The fact that it’s melting is the problem: when permafrost thaws, microbes get to work on the ancient organic carbon locked inside it, and that carbon ends up released into the atmosphere.

Releasing ancient carbon into the atmosphere is what humans have been doing at an ever greater rate since we worked out that fossilised carbon makes a really efficient energy source, so when the Arctic starts doing that as well, it’s adding to the limited remaining carbon budget our atmosphere has left. Which means melting permafrost has consequences for how much time humanity has left to wean ourselves off our destructive fossil fuel addiction.

Cliffs of chocolate mousse (photo: Mike Beauregard, flickr)

How much time do we have? How much carbon is in those cliffs of chocolate mousse? We’re not sure. And that’s a big problem. Recent research estimates there could be as much as 1,700 billion tonnes of carbon stored in Arctic permafrost, much higher than earlier estimates from research in the 1990s.

To give that very large number some context, 1,700 billion tonnes can also be called 1,700 Gigatonnes (Gt), which should ring a bell for anyone who read Bill McKibben’s Rolling Stone global warming math article. The article stated that the best current estimate for humanity to have a shot at keeping global average temperatures below a 2°C increase is a carbon budget of 565 Gt of CO2. One catch with the comparison: the permafrost figure counts tonnes of carbon, while the budget counts tonnes of CO2, so if all the permafrost carbon made it into the atmosphere we wouldn’t just blow that budget twice, we’d blow it many times over.
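Converting carbon to CO2 means multiplying by the ratio of their molecular masses, 44/12 ≈ 3.7 (standard chemistry, not a number from the post):

$$1{,}700\ \text{Gt C} \times \tfrac{44}{12} \approx 6{,}200\ \text{Gt CO}_2 \approx 11 \times 565\ \text{Gt CO}_2$$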

What this paper did was ask the long list of experts above – specialists in soil, soil carbon, permafrost and Arctic research – three questions over three different time scales.

  1. How much permafrost is likely to degrade (aka quantitative estimates of surface permafrost degradation)?
  2. How much carbon will it likely release?
  3. How much methane will it likely release?

They included the methane question because methane has big short-term ramifications for the atmosphere. Methane only persists in the atmosphere for around a decade (compared to carbon dioxide, a large fraction of which hangs around for 1,000-plus years), but it has 33 times the global warming potential (GWP) of CO2 over a 100-year period. So, averaged over the century after you’ve released it, one tonne of methane warms as much as 33 tonnes of CO2. This could quickly blow our carbon budgets as we head merrily past 400 parts per million of CO2 in the atmosphere from human forcing.
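The CO2-equivalent numbers later in this post are just that bookkeeping applied to the methane mass:

$$\text{CO}_2\text{e} = m_{\text{CH}_4} \times \text{GWP}_{100}, \qquad \text{GWP}_{100}(\text{CH}_4) \approx 33$$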

The time periods for each question were: by 2040 with a 1.5–2.5°C Arctic temperature rise (the Arctic warms faster than lower latitudes); by 2100 with between 2.0–7.5°C of rise (so anywhere from ‘we can possibly deal with this’ to ‘catastrophic climate change’); and by 2300 with temperatures stable after 2100.

The estimates the experts gave were then screened for level of expertise (you don’t want to be asking an atmospheric specialist the soil questions!) and averaged to give an estimate range. For surface loss of permafrost under the highest warming scenario, the results were:

  1. 9–16% loss by 2040
  2. 48–63% loss by 2100
  3. 67–80% loss by 2300

Permafrost melting estimates for each time period over four different emissions scenarios (from paper)

Ouch. If we don’t start doing something serious about reducing our carbon emissions soon, we could be blowing that carbon budget really quickly.

For how much carbon the highest warming scenario may release, the results were:

  1. 19–45 billion tonnes (Gt) of CO2 by 2040
  2. 162–288 Gt CO2 by 2100
  3. 381–616 Gt CO2 by 2300

Hmm. So if we don’t stop burning carbon by 2040, melting permafrost will have taken up to 45 Gt of CO2 out of our atmospheric carbon budget of 565 Gt. Let’s hope we haven’t burned through the rest by then too.

However, if Arctic temperature rises were limited to 2°C by 2100, the CO2 emissions would ‘only’ be:

  1. 6–17 Gt CO2 by 2040
  2. 41–80 Gt CO2 by 2100
  3. 119–200 Gt CO2 by 2300

That’s about a third of the highest warming estimates, but still nothing to breathe a sigh of relief at, given that the 2000–2010 average rate of fossil fuel emissions was 7.9 Gt of carbon per year. So even the low estimate has permafrost releasing more than two years’ worth of global emissions, meaning we’d have to stop burning carbon two years earlier.

When the researchers calculated the expected methane emissions, the estimates were low. However, when they converted the methane to CO2 equivalent (CO2e) (methane being 33 times more potent than CO2 over 100 years), they got:

  1. 29–60 Gt CO2e by 2040
  2. 250–463 Gt CO2e by 2100
  3. 572–1,004 Gt CO2e by 2300

Thankfully, most of the carbon in the permafrost is expected to be released as the less potent carbon dioxide, but working out the balance between how much methane may be released into the atmosphere vs how much will be carbon dioxide is really crucial for working out global carbon budgets.

The other problem is that most climate models that look at permafrost contributions to climate change do it in a linear manner, where increased temperatures lead directly to an increase in microbes and bacteria, and the carbon is released. In reality, permafrost is much more dynamic and non-linear, and therefore more unpredictable, which makes it a pain to put into models. It’s really difficult to predict abrupt thaw processes, where ice wedges can melt and the ground can collapse irreversibly (last summer’s melting across 98% of Greenland’s surface shows how abruptly the Arctic can change).

These kinds of non-linear processes (the really terrifying bit about climate change) made the news this week when it was reported that the Alaskan town of Newtok is likely to wash away by 2017, making the townspeople the first climate refugees from the USA.

The paper points out that one of the key limitations to knowing exactly what the permafrost is going to do is the lack of historical permafrost data. Permafrost is in really remote, hard-to-get-to places where people don’t live, precisely because the ground is permanently frozen, so people haven’t been going there and taking samples the way they have in more populated areas with lengthy and detailed climate records. But if you don’t know how much permafrost was historically there, you can’t tell how fast it’s melting.

The key point from this paper is that even though we’re not sure exactly how much permafrost will contribute to global carbon budgets and temperature rise, this uncertainty alone should not be enough to stall action on climate change.

Yes, there is uncertainty in exactly how badly climate change will affect the biosphere and everything that lives within it, but currently our options range from ‘uncomfortable and we may be able to adapt’ to ‘the next mass extinction’.

So while we’re working out exactly how far we’ve opened the Pandora’s Freezer of permafrost, let’s also stop burning carbon. 

Wind Power Kicks Fossil Power Butt

What if you ran the numbers for wind power replacing all fossil fuel and nuclear electricity in Canada? How could it work? How much would it cost?

WHO:  L.D. Danny Harvey, Department of Geography, University of Toronto, Canada

WHAT: Mapping and calculating the potential for wind electricity to completely replace fossil fuel and nuclear electricity in Canada

WHEN: February 1st, 2013

WHERE: Energy Vol. 50, 1 February 2013

TITLE: The potential of wind energy to largely displace existing Canadian fossil fuel and nuclear electricity generation (subs req.)

As a kid, I really loved the TV series Captain Planet. I used to play it in the school yard with my friends and I always wanted to be the one with the wind power. Mostly because my favourite colour is blue, but also because I thought the girl with the wind power was tough.

Go Planet! Combining the power of wind, water, earth, fire and heart (Wikimedia commons)

What’s my childhood got to do with this scientific paper? Well, what if you looked at the Canadian Wind Energy Atlas and worked out whether we could harness the power of wind in Canada to replace ALL fossil fuel and nuclear electricity? How would you do it? How much would it cost? That’s what this researcher set out to discover (in the only paper I’ve written about yet that has a single author!).

Refreshingly, the introduction to the paper has what I like to call real talk about climate change. He points out that the last time global average temperatures increased by 1°C, sea levels were 6.6–9.4 m higher, which means ‘clearly, large and rapid reductions in emissions of CO2 and other greenhouse gases are required on a worldwide basis’.

Electricity accounts for about 25% of global greenhouse gas emissions, and while there have been studies in the US and Europe looking at the spacing of wind farms to reduce variability for large-scale electricity generation, no one had looked at Canada yet.

So how does Canada stack up? Really well. In fact, the paper found that Canada has wind energy available equivalent to many times its current electricity demand!

The researcher looked at onshore and offshore wind at 30 m, 50 m and 80 m above the ground for each season to calculate the average wind speed and power generation. Taking into account the wake effect of other turbines, and eliminating areas that can’t have wind farms – cities, mountains above 1,600 m elevation (to avoid wind farms on the Rocky Mountains), shorelines (to avoid wind farms on your beach) and wetlands – the paper took the Wind Energy Atlas and broke the map into cells.
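As background for why hub height and location matter so much (textbook wind engineering, not something spelled out in this post): the power a turbine can extract from the wind scales with the cube of wind speed,

$$P = \tfrac{1}{2}\,\rho A v^3 C_p$$

where ρ is air density, A is the rotor’s swept area, v is the wind speed and C_p is the turbine’s power coefficient, capped by the Betz limit of 16/27 ≈ 0.59. A site with 25% higher average wind speed delivers roughly double the power.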

For calculating wind farm potential there are generally three options: maximise the electricity production, maximise the capacity factor, or minimise the cost of the electricity. The paper looked at all three and found that the best overall option (which in some cases also gives a better average cost) was to aim for maximum capacity factor.

Using wind data and electricity demand data from 2007, the researcher ran the numbers. In 2007, the total fossil fuel and nuclear electricity capacity was 49.0 GW (gigawatts), generating 249.8 TWh (terawatt hours). This is 40% of Canada’s total national electricity capacity of 123.9 GW and 616.3 TWh of generation.
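As a quick sanity check (my own back-of-envelope arithmetic, not a calculation from the paper), those two numbers tell you how hard the fossil and nuclear fleet was actually running:

```python
# Implied average capacity factor of Canada's 2007 fossil fuel + nuclear fleet,
# using the capacity and generation figures quoted above.
capacity_gw = 49.0        # installed fossil + nuclear capacity (GW)
generation_twh = 249.8    # actual fossil + nuclear generation (TWh)
hours_per_year = 8760

max_possible_twh = capacity_gw * hours_per_year / 1000  # GW x h -> TWh
print(f"capacity factor: {generation_twh / max_possible_twh:.0%}")  # ~58%
```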

To deal with wind power being intermittent, the paper noted that there’s already storage capacity for several years’ worth of electricity in the hydro systems of Quebec and Manitoba, as well as many other options for handling supply–demand mismatches (which this paper doesn’t address), making a national wind electricity grid feasible.

To run the numbers, the country was split into five sectors. Starting with the sector with the greatest wind energy potential, the numbers were run until a combination was found where the wind energy met the national fossil fuel and nuclear requirements.

Wind farms required in each sector to provide enough electricity to completely replace the fossil fuel and nuclear power used in 2007 (from paper)

Once the researcher worked out that you could power the whole country’s fossil fuel and nuclear electricity with the wind energy from any sector, he looked at minimising costs and meeting the demand required for each province.

He looked at what size of wind farm would be needed, and then calculated the costs for infrastructure (building the turbines) as well as transmission (getting the electricity from the farm to the demand). Some offshore wind in BC, Hudson Bay, and Newfoundland and Labrador, combined with some onshore wind in the prairies and Quebec and that’s all we need.

The cost recovery for the infrastructure investment was calculated over 20 years for the turbines and 40 years for the transmission lines. The paper found that minimising transmission line distance resulted in the largest wasted generation in winter but the smallest in summer; overall, though, the best method was still to maximise the capacity factor of the wind farms.

But the important question – how much would your power cost? On average, 5–7 cents per kWh (kilowatt hour), which is on par with the 7 c/kWh that BC Hydro currently charges in Vancouver. Extra bonus – wind power comes without needing to mine coal or store radioactive nuclear waste for millennia!

Estimated wind power costs for Canada (from paper)

Some more food for thought – the researcher noted that the estimated cost of coal-fired electricity with (still unproven) carbon capture and storage technology is likely to be around 9 c/kWh, while the current cost of nuclear-generated electricity is between 10–23 c/kWh. Also, the capacity factor of turbines is likely to increase as the technology rapidly improves, which will push the cost of wind electricity down even further.

This is all great news – Canada has the wind energy and the potential to build a new industry to not only wean ourselves off the fossil fuels that are damaging and destabilising our atmosphere, but to export that knowledge as well. We can be an energy superpower for 21st Century fuels, not fossil fuels. I say let’s do it!

Crash Diets and Carbon Detoxes: Irreversible Climate Change

Much of the changes humans are causing in our atmosphere today will be largely irreversible for the rest of the millennium.

WHO: Susan Solomon, Chemical Sciences Division, Earth System Research Laboratory, National Oceanic and Atmospheric Administration, Boulder, Colorado, USA
Gian-Kasper Plattner, Institute of Biogeochemistry and Pollutant Dynamics, Zurich, Switzerland
Reto Knutti, Institute for Atmospheric and Climate Science, Zurich, Switzerland
Pierre Friedlingstein, Institut Pierre Simon Laplace/Laboratoire des Sciences du Climat et de l’Environnement, Unité Mixte de Recherche Commissariat à l’Energie Atomique – Centre National de la Recherche Scientifique – Université Versailles Saint-Quentin, Commissariat à l’Energie Atomique-Saclay, l’Orme des Merisiers, France

WHAT: Looking at the long term effects of climate pollution to the year 3000

WHEN: 10 February 2009

WHERE: Proceedings of the National Academy of Sciences of the USA (PNAS), vol. 106, no. 6 (2009)

TITLE: Irreversible climate change due to carbon dioxide emissions

Stopping climate change often involves the metaphor of ‘turning down the thermostat’ of the heater in your house; the heater gets left on too high for too long, you turn the thermostat back down, the room cools down, we are all happy.

This seems to also be the way many people think about climate change – we’ve put too much carbon pollution in the atmosphere for too long, so all we need to do is stop it, and the carbon dioxide will disappear like fog burning off in the morning.

Except it won’t. This paper (from 2009, but I came across it recently while reading around the internet) looks at the long-term effects of climate change and found that for CO2 emissions, the effects can still be felt 1,000 years after we stop polluting. Bummer. So much for that last-minute carbon detox that politicians seem to be betting on. Turns out it won’t do much.

The researchers defined ‘irreversible’ in this paper as 1,000 years, taking their models out to just beyond the year 3000. On a human scale, 1,000 years is more than 10 generations. Geologically it’s not forever, but from our point of view it pretty much is.

So what’s going to keep happening because we can’t give up fossil fuels today that your great-great-great-great-great-great-great-great-great-great grandkid is going to look back on and say ‘well that was stupid’?

The paper looked at the three most detailed and well known effects: atmospheric temperatures, precipitation patterns and sea level rise. Other long term impacts will be felt through Arctic sea ice melt, flooding and heavy rainfall, permafrost melt, hurricanes and the loss of glaciers and snowpack. However, the impacts with the most detailed models and greatest body of research were the ones chosen for this paper (which also excluded the potential for geo-engineering because it’s still very uncertain and unknown).

Our first problem is going to be temperature increases, because temperatures rise as CO2 accumulates in the atmosphere. Even if we turned off emissions completely (practically unfeasible, but the cleanest way to model the long-term effects), temperatures would stay within about 0.5°C of their peak until the year 3000.

Why does this occur? Why does the temperature not go back down just as quickly once we stop feeding it more CO2? Because CO2 stays in the atmosphere for a much longer time than other greenhouse gases. As the paper says: ‘current emissions of major non-carbon dioxide greenhouse gases such as methane or nitrous oxide are significant for climate change in the next few decades or century, but these gases do not persist over time in the same way as carbon dioxide.’

Temperature changes to the year 3000 with different CO2 concentration peaks (from paper)

Our next problem is changing precipitation patterns, which are governed by the Clausius–Clapeyron relation describing phase transitions in matter. What the relation tells us is that as temperature increases, the atmosphere holds more water vapour, which changes how that vapour is transported through the atmosphere, changing the hydrological cycle.
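For the curious, the relation itself (standard atmospheric physics, not specific to this paper) says the saturation vapour pressure e_s climbs steeply with temperature:

$$\frac{de_s}{dT} = \frac{L_v\, e_s}{R_v\, T^2}$$

where L_v is the latent heat of vaporisation and R_v is the gas constant for water vapour. Near surface temperatures this works out to roughly 7% more water-holding capacity per degree of warming.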

The paper notes that these patterns are already happening, consistent with the models, in the southwest of the USA and the Mediterranean. They found that dry seasons will become approximately 10% drier for each degree of warming, and the southwest of the USA is expected to be approximately 10% drier at 2°C of global warming. As a comparison, the Dust Bowl of the 1930s was 10% drier over two decades. Given that many climate scientists (and the World Bank) think we’ve already reached the point where 2°C of warming is inevitable, it seems Arizona is going to become a pretty uncomfortable place to live.

Additionally, even if we managed to peak at 450 ppm of CO2, irreversible decreases in dry-season precipitation of ~8–10% would be expected in large areas of Europe, Western Australia and North America.

Dry seasons getting drier around the world (from paper)

Finally, the paper looked at sea level rise, which is a triple whammy. The first issue is that warming causes seawater to expand (aka thermal expansion), which raises sea level. The second is that ocean mixing through currents will continue, carrying the warming (and therefore the thermal expansion) into deeper water. Thirdly, the melting of icecaps on land adds new volume to the ocean.

The paper estimates that the eventual sea level rise from thermal expansion alone is 20–60 cm per degree of warming. Additionally, the loss of glaciers and small icecaps will give us ~20–70 cm of sea level rise, so we’re looking at 40–130 cm even before we start counting Greenland (which is melting faster than most estimates anyway).

Sea level rise from thermal expansion only with different CO2 concentration peaks (from paper)

What does all of this mean? Well firstly it means you should check how far above sea level your house is and you may want to hold off on that ski cabin with all the melting snowpack as well.

More importantly though, it means that any last minute ‘saves the day’ Hollywood-style plans for reversing climate change as the proverbial clock counts down to zero are misguided and wrong. The climate pollution that we are spewing into the atmosphere at ever greater rates today will continue to be a carbon hangover for humanity for the next 1000 years or so. Within human time scales, the changes that we are causing to our atmosphere are irreversible.

So naturally, we should stop burning carbon now.

Admin Post

Scientific evidence (from DeSmogblog.com)

Yesterday, much to my excitement, the climate scientist Michael Mann retweeted my most recent blog, thus sending my page views to a new record and attracting several climate deniers to try and comment about how ‘climate change isn’t real’.

On this blog, I aim to bring peer reviewed climate science to a wider audience, so that we can better understand the challenges facing humanity caused by the burning of fossil fuels.

While there are certainly many aspects of climate mitigation that require much discussion and debate, what does not require debate is whether the science of climate change is real.

Anyone who wishes to try and prove climate change is not real (thus attempting to disprove the overwhelming consensus of scientific research) in the comments section will not make it through moderation.

And now for less seriousness, here’s one of my favourite climate videos of all time, from Australia.

What’s in a Standard Deviation?

“By 2100, global average temperatures will probably be 5 to 12 standard deviations above the Holocene temperature mean for the A1B scenario” Marcott et al.

WHO: Shaun A. Marcott, Peter U. Clark, Alan C. Mix, College of Earth, Ocean, and Atmospheric Sciences, Oregon State University, Corvallis, Oregon, USA
Jeremy D. Shakun, Department of Earth and Planetary Sciences, Harvard University, Cambridge, Massachusetts, USA.

WHAT: A historical reconstruction of average temperature for the past 11,300 years

WHEN: March 2013

WHERE: Science, Vol. 339 no. 6124, 8 March 2013 pp. 1198-1201

TITLE: A Reconstruction of Regional and Global Temperature for the Past 11,300 Years (subs req.)

We all remember the standard deviation bell curve from high school statistics: in a population (like your class at school) there will be a distribution of some measurement, with most people falling around the mean. The usual example you start off with is the height of everyone in your classroom – most people will be around the same height, some will be taller, some shorter.

The further you vary from the mean, the less likely that value is to occur, because around 68% of the population fits into the first standard deviation either side of the mean. The important bit to keep in mind when reading about this paper is that the bell curve is usually drawn out to three standard deviations either side of the mean, which covers 99.7% of all the data. The odds of a data point falling outside three standard deviations from the mean are about 0.3% in total, or roughly 0.15% on either side.
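If you want to check those percentages yourself, the standard normal distribution gives them directly (a quick Python sketch, using nothing but the standard library):

```python
# The 68-95-99.7 rule: the fraction of a normal population lying within
# z standard deviations of the mean is erf(z / sqrt(2)).
import math

for z in (1, 2, 3):
    inside = math.erf(z / math.sqrt(2))
    per_tail = (1 - inside) / 2
    print(f"within +/-{z} sigma: {inside:.1%}, each tail: {per_tail:.2%}")

# within +/-1 sigma: 68.3%, each tail: 15.87%
# within +/-2 sigma: 95.4%, each tail: 2.28%
# within +/-3 sigma: 99.7%, each tail: 0.13%
```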

The standard deviation bell curve (Wikimedia commons)

What does high school statistics have to do with global temperature reconstructions? Well, it’s always good to see what’s happening in the world within context. Unless we can see comparisons to what has come before, it can be really hard to see what is and isn’t weird when we’re living in the middle of it.

The famous ‘Hockey Stick’ graph that was constructed for the past 1,500 years by eminent climate scientist Michael Mann showed us how weird the current warming trend is compared to recent geologic history. But how does that compare to all of the Holocene period?

Well, we live in unusual times. But first, the details. These researchers used 73 globally distributed temperature records based on various different proxies. A proxy works by examining the chemical composition of something that has been around for longer than our thermometers have, to work out what the temperature would have been. This can be done with ice cores, slow-growing trees and marine species like coral. According to the NOAA website, fossil pollen can also be used, which I think is awesome (because it’s kind of like Jurassic Park!).

They used more marine proxies than most other reconstructions (80% of their proxies were marine) because they’re better suited to longer reconstructions. The resolution of the proxies ranged from 20 years to 500 years, with a median of 120 years.

They then ran the data through a Monte Carlo randomisation scheme (which is less exotic than it sounds) to quantify the uncertainties, and ran a ‘white noise’ data set with a mean of zero through the same process to double-check for errors. Then the chemical data was converted into temperature data before it all got stacked together into a weighted mean with a confidence interval. It’s like building a layer cake, but with math!
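To give a flavour of the approach, here’s a minimal sketch (my own illustration, with made-up placeholder data and a simple unweighted stack; the paper’s real scheme also perturbs each record’s age model and uses weights):

```python
# Monte Carlo stacking: perturb every proxy record within its uncertainty,
# stack each realisation, and read a confidence band off the spread.
import numpy as np

rng = np.random.default_rng(0)
n_proxies, n_times, n_realisations = 73, 100, 1000

# Placeholder proxy temperature anomalies (one row per record); the real
# input is the 73 published proxy records, not random numbers.
proxies = rng.normal(0.0, 0.5, size=(n_proxies, n_times))
sigma = 0.3  # assumed 1-sigma proxy temperature uncertainty (degrees C)

stacks = np.empty((n_realisations, n_times))
for i in range(n_realisations):
    perturbed = proxies + rng.normal(0.0, sigma, size=proxies.shape)
    stacks[i] = perturbed.mean(axis=0)  # simple unweighted stack

mean = stacks.mean(axis=0)
low, high = np.percentile(stacks, [2.5, 97.5], axis=0)  # 95% confidence band
```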

Interestingly, with their white noise data, they found the reconstruction was more reliable over longer time periods. Variability was preserved best at scales of 2,000 years or more, only half was left at the 1,000-year scale, and below 300 years the variability was gone.

They also found that their reconstruction lined up over the final 1,500 years to present with the Mann et al. 2008 reconstruction and was also consistent with Milankovitch cycles (which ironically indicate that without human interference, we’d be heading into the next glacial period right now).

Temperature reconstructions: Marcott et al. in purple (with blue confidence interval), Mann et al. in grey (from paper)

They found that the global mean temperature for 2000–2009 has not yet exceeded the warmest temperatures of the Holocene, which occurred 5,000–10,000 years ago (or BP – before present). However, we are currently warmer than 82% of the Holocene distribution.

But the disturbing thing in this graph that made me feel really horrified (and I don’t get horrified by climate change much anymore, because I read so much on it that I’m somewhat de-sensitised to the ‘end of the world’ scenarios) is the rate of change. The paper found that global temperatures have swung from nearly the coldest of the Holocene (the bottom of the purple line just before it spikes up suddenly) to nearly the warmest, all within the past century.

We are causing changes to happen so quickly in the earth’s atmosphere that something that would have taken over 11,000 years has just happened in the last 100. We’ve taken a 5,000 year trend of cooling and spiked up to super-heated in record time.

This would be bad enough on its own, but it’s not even the most horrifying thing in this paper. It’s this:

‘by 2100, global average temperatures will probably be 5 to 12 standard deviations above the Holocene temperature mean for the A1B scenario.’ (my emphasis)

Remember how I said to keep in mind that 99.7% of all data points in a population fall within three standard deviations on a bell curve? That’s because we are currently heading off the edge of the chart for weird and unprecedented climate, beyond even the ~0.15% chance of occurring without human carbon pollution.

The A1B scenario from the IPCC is the ‘medium worst case scenario’, and our continuously growing carbon emissions (which actually need to be shrinking) are currently outstripping it. We are so far out into the tail of weird occurrences that it’s off the charts of a bell curve.

NOAA Mauna Loa CO2 data (396.80 ppm at Feb. 2013)

As we continue to fail to reduce our carbon emissions in any meaningful way, we will reach 400ppm (parts per million) of carbon dioxide in the atmosphere in the next few years. At that point we will truly be in uncharted territory for any time in human history, on a trajectory that is so rapidly changing as to be off the charts beyond 99.7% of the data for the last 11,300 years. The question for humanity is; are we willing to play roulette with our ability to adapt to this kind of rapidly changing climate?

Carbon in Your Supply Chain

How will a real price on carbon affect supply chains and logistics?

WHO:  Justin Bull, (PhD Candidate, Faculty of Forestry, University of British Columbia, Canada)
Graham Kissack, (Communications Environment and Sustainability Consultant, Mill Bay, Canada)
Christopher Elliott, (Forest Carbon Initiative, WWF International, Gland, Switzerland)
Robert Kozak, (Professor, Faculty of Forestry, University of British Columbia, Canada)
Gary Bull, (Associate Professor, Faculty of Forestry, University of British Columbia, Canada)

WHAT: Looking at how a price on carbon can affect supply chains, with the example of magazine printing

WHEN: 2011

WHERE: Journal of Forest Products Business Research, Vol. 8, Article 2, 2011

TITLE: Carbon’s Potential to Reshape Supply Chains in Paper and Print: A Case Study (membership req)

Forestry is an industry that’s been doing it tough in the face of rapidly changing markets for a while. From the Clayoquot Sound protests of the 1990s against clearcutting practices to the growing realisation that deforestation is one of the leading contributors to climate change, it’s the kind of industry where you either innovate or you don’t survive.

Which makes this paper – a case study of how monetising carbon has the potential to re-shape supply chains and make them low carbon – really interesting. From the outset, the researchers recognise where our planet is heading with climate change, stating that ‘any business that emits carbon will [have to] pay for its emissions’.

To look at the potential for low carbon supply chains, the paper follows the production and printing of a magazine in North America – measuring the carbon emissions from cutting down the trees, turning the trees into paper, transporting materials at each stage of the process, and the printing itself.

Trees to magazines (risa ikead, flickr)

They did not count the emissions of the distribution process, or any carbon emissions related to disposal after the magazine was read, because these had too many uncertainties in the data. However, they worked with the companies involved in the process to get the most accurate picture they possibly could.

The researchers found that the majority of the carbon is emitted in the paper manufacturing process (41%). The paper trail ran from a tree on Vancouver Island, shipped as fibre by truck to Port Alberni, manufactured into paper, then shipped by truck and barge to Richmond, and finally by train to the printing press in Merced, California.

Activity | Carbon emissions (kg CO2/ADt) | Percentage of total
Harvesting, road-building, felling, transport to sawmills | 55 | 12%
Sawmilling into dimensional and residual products | 45 | 10%
Transport of chips to mill | 8 | 2%
Paper manufacturing process | 185 | 41%
Transportation to print facility | 127 | 28%
Printing process | 36 | 8%
Total | 456 | 100%

Supply Chain Emissions (Table 1, reproduced from hardcopy)

The case study showed that upstream suppliers consume more energy than downstream suppliers; however, downstream suppliers are the most visible to consumers. That poses a challenge when trying to get the larger emitters to minimise their carbon footprint, as there’s less likelihood of consumer pressure on lesser-known organisations.

That being said, there can be major economic incentives for companies to minimise their carbon footprint, given that Burlington Northern Santa Fe Railways (who shipped the paper from Canada to the printing press in California in this study) spent approximately $4.6 billion on diesel fuel in 2008 (the data year for the case study).

Given that California implemented a carbon cap-and-trade market at the end of 2012, and that growing awareness of the urgency of cutting carbon emissions means the price of carbon is likely to increase, $4.6 billion in diesel costs could rapidly escalate for a company like BNSF. If part or all of their transport could be switched to clean energy, the company will make significant savings as polluting fossil fuel sources are phased out.

The companies in this study were very aware of these issues, which is encouraging. They agreed that carbon and sustainability will be considered more closely in the future and that carbon has the potential to change the value of existing industrial assets as corporations who are ‘carbon-efficient’ may become preferred suppliers.

The researchers identified three types of carbon-related risk that companies could face: regulatory risk, financial risk and market access risk. The innovative businesses that thrive in a low carbon 21st century economy will be those thinking about and preparing for an economy that doesn’t burn carbon for fuel, or where burning carbon is no longer profitable.

I really liked the paper’s example of financial risk in the bond market, ‘where analysts are projecting a premium on corporate bonds for new coal fired power plants’, meaning it will be harder for companies to raise money to further pollute our atmosphere. This is especially important given that Deutsche Bank and Standard and Poor’s put the fossil fuel industry on notice last week, saying that easy finance for their fossil fuel party is rapidly coming to an end.

Of course, no-one ever wants to believe that the boom times are coming to an end. But the companies that think ahead of the curve and innovate to reduce their carbon risk instead of going hell for leather on fossil fuels will be the ones that succeed in the long run.

Hot Air Balloon – Heat Emissions in London

Detailed measurements of the thermal pollution in Greater London and a look at the long term trend.

WHO: Mario Iamarino, Dipartimento di Ingegneria e Fisica dell’Ambiente, Università degli Studi della Basilicata, Potenza, Italy
Sean Beevers,  Environmental Research Group, King’s College London, London, UK
C. S. B. Grimmond, Environmental Monitoring and Modelling Group, Department of Geography, King’s College London, London, UK

WHAT: Measuring the heat pollution of London with regard to both where and when the emissions occur.

WHEN: July 2011 (online version)

WHERE: International Journal of Climatology (Int. J. Climatol.) Vol. 32, Issue 11, September 2012

TITLE: High-resolution (space, time) anthropogenic heat emissions: London 1970–2025 (subs. req)

Cities have a rather unique position in our world when we think about climate change. They are the source of the majority of emissions, but due to their dense nature they are also the sites where the greatest innovations can happen and reductions in emissions can occur fastest.

If you want to be able to harness the creativity and innovation of cities to be able to reduce emissions, you need to know what emissions are coming from where and when. You need data first.

That’s where these scientists from Italy and the UK stepped in, to work out exactly how much waste heat the city of London emits. Waste heat is an issue because it contributes to the urban heat island effect, where temperatures in cities run higher than in the surrounding area – which can be deadly in summer heat waves and leads to greater power usage.

Something interesting that I hadn’t considered before is that concentrated emissions can also change the atmospheric chemistry in a localised area.

The researchers took some very detailed data from 2005–2008 for the city of London and broke it down into as accurate a picture as they could. For buildings, they looked at the differences between residential and commercial buildings and the different ways each building use emits heat.

They looked at transport data using traffic congestion figures, the London Atmospheric Emissions Inventory and data from several acronym-happy government departments to work out what kinds of vehicles emit waste heat at what times of day, and at what temperature each type of fuel combusts. They didn’t count trains (underground or overground), because they are mostly electric (and some of them recover waste heat from braking), or small boats, which don’t emit enough to be counted.

They looked at the heat emitted by people at various stages of activity, age and time of day assuming a 1:1 ratio of females to males. They even looked at the standard metabolic rates of people to work out how much heat a person emits exercising, resting or sleeping!

Data! London waste heat emissions (from paper)

What all that number and formula crunching told them was that the total average anthropogenic heat flux for London is 10.9 W/m² (watts per square metre). Over a year that works out to about 150 terawatt hours of waste energy (in the form of heat) – as a comparison, that’s all of the electricity used in Sweden in 2010.
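Checking that conversion (my arithmetic: the post doesn’t state the area, so I’m assuming Greater London’s ~1,572 km²):

```python
# Average waste heat flux x area x hours in a year = annual waste energy.
flux_w_per_m2 = 10.9   # average anthropogenic heat flux (W/m^2)
area_m2 = 1572e6       # Greater London, ~1,572 km^2 (my assumption)
hours_per_year = 8760

energy_twh = flux_w_per_m2 * area_m2 * hours_per_year / 1e12  # Wh -> TWh
print(f"{energy_twh:.0f} TWh per year")  # ~150 TWh
```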

Of that total, 80% came from buildings, with 42% from domestic residences and 38% from industrial buildings. The next biggest source of heat was transport, at 15% of the 10.9 W/m²; within the transport category, personal cars were the biggest contributor (64% of the transport portion).

Human metabolism contributed only 5.1% of the total (0.55 W/m²), so maybe they’re not doing enough exercise in London? The heat emissions had peaks and valleys – winter releases more waste heat than summer, because heating systems in winter are more widespread than air conditioners in summer. The industrial building emissions were concentrated in central London (especially Canary Wharf and the City, where there are many new centrally heated and cooled high-rise offices), while the domestic building emissions were spread more widely around the centre of London.

Building heat emissions, domestic (left) and industrial (right) (from paper)

Once they had all the data from 2005 to 2008, they considered trends from 1970 projected out to 2025 to see how great a role heat emissions could play in climate change. Using data from the Department of Energy and Climate Change, the London Atmospheric Emissions Inventory (LAEI) and data on population dynamics and traffic patterns, they estimated that all contributors to heat emissions would increase unless the UK greenhouse gas reduction targets are fully implemented.

The reduction target is an 80% cut by 2050 (against the baseline of 1990 emissions). Since the research indicates that buildings are the biggest heat emitters (and are therefore burning the most energy to stay at the right temperature), meeting that target will require a big push on building efficiency.

The paper notes that if the Low Carbon Transition Plan for London is implemented, average waste heat emissions for London will drop to 9 W/m² by 2025, but in the central City of London the best reductions are likely to only get back to 2008 levels, due to the expected growth in the area.

So what does any of this mean? It means London now has the data to know where they can find efficiencies that can complement their greenhouse gas mitigation programs. Because that’s the thing about combating climate change – it’s not a ‘one problem, one solution’ kind of issue. We need to do as many different things as possible all at once to try and add up to the levels of decarbonising that the world needs to be doing to avoid catastrophic climate change. So go forth London, take this data and use it well!

Unprecedented: Melting Before Our Eyes

The volume of Arctic sea ice is reducing faster than the area of sea ice, further speeding the arctic death spiral.

WHO:  Seymour W. Laxon, Katharine A. Giles, Andy L. Ridout, Duncan J. Wingham, Rosemary Willatt, Centre for Polar Observation and Modelling, Department of Earth Sciences, University College London, London, UK
Robert Cullen, Malcolm Davidson, European Space Agency, Noordwijk, The Netherlands
Ron Kwok, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, USA.
Axel Schweiger, Jinlun Zhang, Polar Science Center, Applied Physics Laboratory, University of Washington, Seattle, Washington, USA
Christian Haas, Department of Earth and Space Science and Engineering, York University, Toronto, Canada.
Stefan Hendricks, Alfred Wegener Institute for Polar and Marine Research, Bremerhaven, Germany
Richard Krishfield, Woods Hole Oceanographic Institution, Woods Hole, Massachusetts, USA
Nathan Kurtz, School of Computer, Math, and Natural Sciences, Morgan State University, Baltimore, Maryland, USA.
Sinead Farrell, Earth System Science Interdisciplinary Center, University of Maryland, Maryland, USA.

WHAT: Measuring the volume of polar ice melt

WHEN: February 2013 (online pre-published version)

WHERE: Geophysical Research Letters, 2013, doi: 10.1002/grl.50193

TITLE:  CryoSat-2 estimates of Arctic sea ice thickness and volume (subs req.)

Much has been written about the Arctic Death Spiral of sea ice melting each spring and summer, with many researchers attempting to model and predict exactly how fast the sea ice is melting and when we will get the horrifying reality of an ice-free summer Arctic.

But is it just melting at the edges? Or is the thickness, and therefore the volume, of the sea ice being reduced as well? That’s what these researchers set out to find, using satellite data from CryoSat-2 (science with satellites!).

The researchers used satellite radar altimeter measurements of sea ice thickness, and then compared their results with measured in-situ data as well as other Arctic sea ice models.

A loss of volume in Arctic sea ice is a signifier of changes in the heat exchange between the ice, ocean and atmosphere. Most global climate models predict a decrease in sea ice volume of 3.4% per decade, which is larger than the predicted 2.4% per decade decline in sea ice area.

Sea ice area minimum from September 2012 (image: NASA/Goddard Space Flight Center Scientific Visualization Studio)

The researchers ran their numbers for ice volume in winter 2010/11 and winter 2011/12, and then used the recorded data sets to check the accuracy of their satellites (calibration for my fellow science nerds).

The most striking thing they found was a much greater loss of ice thickness in the north of Greenland and the Canadian Archipelago. Additionally, they found that first-year ice was thinner in autumn, which made it harder for the ice to catch up to average thickness during the winter and made greater melting easier the following summer.

Interestingly, they found that there was more ice growth in the winters of 2010–12 (7,500 km³) than in 2003–08 (5,000 km³), which works out to an extra 36 cm of ice growth over the winter. Unfortunately the increased summer melt far outstrips the extra growth, so it’s not adding to the overall thickness of the sea ice.
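To see where the 36 cm comes from (my arithmetic, assuming the extra volume is spread over a winter ice cover of roughly 7 million km², which the post doesn’t state):

$$\frac{(7{,}500 - 5{,}000)\ \text{km}^3}{\sim 7\times10^{6}\ \text{km}^2} \approx 0.36\ \text{m}$$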

For the period of 2010–12, the satellite-measured rate of decline in autumn sea ice volume was 60% greater than the decline predicted using PIOMAS (the Panarctic Ice Ocean Modeling and Assimilation System). Most researchers, on seeing results like that, might hope there’s an error; however, when measured against the recorded data, the CryoSat-2 measurements were accurate to within 0.1 metres. So while astounding, the 60%-greater-than-expected loss of sea ice thickness is pretty spot on.

The researchers think that lower ice thickness at the end of winter in February and March could be contributing to the scarily low September minimums in the Arctic death spiral, but the greatest risk here is that the ever-increasing melt rate of Arctic ice could take the climate beyond a tipping point where climate change becomes irreversible and uncontrollable, in a way we are unable to adapt to.

Visualisation of reduction in Arctic sea ice thickness (from Andy Lee Robinson, via ClimateProgress)

So as usual, my remedy for all of this is: stop burning carbon.

When the Party’s Over: Permian Mass Extinction

“The implication of our study is that elevated CO2 is sufficient to lead to inhospitable conditions for marine life and excessively high temperatures over land would contribute to the demise of terrestrial life.”

WHO: Jeffrey T. Kiehl, Christine A. Shields, Climate Change Research Section, National Center for Atmospheric Research, Boulder, Colorado, USA

WHAT: A complex climate model of atmospheric, ocean and land conditions at the Permian mass extinction 251 million years ago to look at CO2 concentrations and their effect.

WHEN: September 2005

WHERE: Geological Society of America, Geology vol. 33 no. 9, September 2005

TITLE: Climate simulation of the latest Permian: Implications for mass extinction

The largest mass extinction on earth occurred approximately 251 million years ago, at the end of the Permian geologic era. Almost 95% of all ocean species and 70% of land species died, and research suggests that what probably caused this extinction was carbon dioxide levels.

As the saying goes; those who do not learn from history are doomed to repeat it, so let’s see what happened to the planet 251 million years ago and work out how we humans can avoid doing it to ourselves at high speed.

This research paper from 2005 presented the first comprehensive climate model of the Permian extinction, meaning the model was sophisticated enough to include the interaction between the land and the oceans (as distinct from ‘uncoupled’ models that just looked at one or the other, not how they affect each other).

The researchers used the CCSM3 climate model, which is housed at the National Center for Atmospheric Research (NCAR) and is one of the major climate models used by the IPCC to project how our climate may change with increasing atmospheric carbon pollution (or emissions reduction). They set the model up with ‘realistic boundary conditions’ for things like ocean layers (25 of them, for those playing at home), atmospheric resolution and energy balance. They then ran the simulation for 900 years under current conditions and checked it against observed atmospheric conditions, with the model matching the observed data.

Then they made their model Permian, which meant increasing CO2 concentrations from our current 397 ppm to 3,550 ppm, the estimated concentration at the end of the Permian era.

What did ramping up the CO2 in this manner do to the planet’s living conditions in the model? It increased the global average temperature to a very high 23.5°C (the global average for the Holocene, our current era, is 14°C).

Oceans
Changing the CO2 concentrations so dramatically made the modelled global average ocean surface temperature 4°C warmer than current conditions. Looking at all the ocean layers in the model, the water was warmer at depth as well, with some areas 3,000 m below sea level measuring 4.5–5°C where they are currently near freezing.

The greatest warming in the oceans occurred at higher latitudes, where ocean temperatures were modelled at 8°C warmer than present, while equatorial tropical oceans were not substantially warmer. The oceans were also much saltier than they currently are.

The big problem for all of the things that called the ocean home at the end of the Permian era is the slowing of ocean circulation and mixing. Currently, dense salty water cools at the poles and sinks, oxygenating and mixing with deeper water, allowing complex organisms to grow and live. If this slows down – which it did in this model – there are serious consequences for all ocean residents.

Current ocean circulation patterns (NOAA, Wikimedia commons)

Their Permian model measured ocean overturning circulation at around 10 Sv (1 Sv is a million cubic metres of flow per second), whereas current ocean overturning circulation is around 15–23 Sv. The researchers think the ocean currents could have slowed enough to create anoxic oceans – also known as ‘ocean dead zones’ or ‘Canfield oceans’ – setting the stage for a large-scale marine die-off.

Land
If the end of the Permian was pretty nasty for ocean residents, how did land-dwellers fare? What happened to the tetrapods of Gondwanaland? Well, the earth looked really different from how it does today.

Permian land mass (Wikimedia commons)

There were deciduous forests at high latitudes, and the elevated CO2 in the model was the dominant reason for warm, ice-free polar regions (which also hindered ocean circulation). Land surface temperatures were between 10–40°C warmer than they are today. In the model, dry sub-tropical climates like the Mediterranean or Los Angeles and Southern California were much hotter, with average daily maximum temperatures around 51°C. Yes, Los Angeles, your daytime high could average 51°C.

Understandably, the authors state that ‘these extreme daily temperature maxima in these regions could contribute to a decrease in terrestrial flora and fauna’, which is scientist for ‘it’s so damn hot outside nothing except cacti can grow’.

All of these changes were run over a 2,700-year period in the model which, if you take the 2005 CO2 concentration of 379 ppm as your base, works out to an increase of 1.17 ppm per year.
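Checking that arithmetic, and running the same sum at today’s growth rate (which is where the ‘around 1,500 years’ below comes from):

$$\frac{3{,}550 - 379\ \text{ppm}}{2{,}700\ \text{yr}} \approx 1.17\ \text{ppm/yr}, \qquad \frac{3{,}550 - 397\ \text{ppm}}{2\ \text{ppm/yr}} \approx 1{,}580\ \text{yr}$$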

This is the important bit to remember if we’re going to learn from history and not go the way of the Permian residents. Our current rate of increase in CO2 concentration is 2 ppm per year, which puts us on a super-speed path to mass extinction. If we continue with business as usual – aptly renamed ‘business as suicide’ by climate blogger Joe Romm – we could complete the next mass extinction in around 1,500 years.

Where humanity is headed (from Royal Society Publishing)

All we need to do to guarantee this outcome for all of humanity is keep the status quo and keep burning fossil fuels. The entire sum of humans as a species on this planet would then be a tiny geological blip in which we turned up, became the most successful invasive species on the globe, burned everything in sight and kept burning it even when we knew it was killing us.

However, I think this part of the paper’s conclusion should give most of us pause for thought:

 ‘Given the sensitivity of ocean circulation to high-latitude warming, it is hypothesized that some critical level of high-latitude warming was reached where connection of surface waters to the deep ocean was dramatically reduced, thus leading to a shutdown of marine biologic activity, which in turn would have led to increased atmospheric CO2 and accelerated warming.’

As a species, if we are going to survive, we need to make sure we do not go past any of those critical levels of warming or tipping points. Which means we need to stop burning carbon as fast as possible. Otherwise, T. rex will have outlasted us as a species by about two million years, which would be kinda embarrassing.