
Predictive Power of Modern Climatological Models


Stochasty


I'm posting this thread to split off a discussion that MBobrik and I were having in another thread regarding Climate Change, since this is one of those issues that can rapidly derail threads and I have no interest in seeing that happen. Specifically, the argument concerns whether or not modern climatological models have predictive power. To this end, MBobrik posted a link to an article in the Guardian which supports that claim, whereas I responded with a post from the skeptic blog WattsUpWithThat which purports to refute it.

Leaving aside the snarky tone of the argument we were having in the other thread, I wanted to go into slightly more detail regarding these claims. The article in the Guardian is reporting on a recent paper in Nature Geoscience by Dr. Myles Allen and collaborators regarding the supposed efficacy of a climate forecast made by his group in 1999, compared with the temperature data since. Alas, the paper itself is behind a paywall, so we will have to make do with the Guardian article reporting on it.

A good deconstruction of Dr. Allen's claims of accurate predictions can be found here (also from WUWT); the salient points are as follows:

1) Dr. Allen's use of ten-year averaging is no coincidence; rather, by a convenient choice of his start and stop dates and by his decision to average the real temperature data on a ten-year time scale, his apparent agreement is much better than his actual agreement.

2) Dr. Allen's forecast fails to replicate the lack of warming observed over the last 15 years. Indeed, his forecast (which began in 1999, well within this "no warming" period) predicts warming throughout. Thus, despite apparent agreement with the ten-year averaged global temperature for 2012, it completely fails to replicate the behavior of the climate over that duration (and thus cannot be considered successful).

3) Nonetheless, Dr. Allen's forecast is the only such forecast produced by the climate science community which even comes this close; all other forecasts predicted much higher levels of warming over the last 15 years.

4) Salient to the discussion MBobrik and I were having, Dr. Allen's forecast consists of a "climate model adjusted by historical observations." Since all of the pure climate models predicted 21st century warming far in excess of that predicted by Dr. Allen, one surmises that his "historical observations" served to adjust the predictions of his climate model downwards - thus belying MBobrik's claim that this particular model is, in itself, predictive.

In addition to these points, I want to further deconstruct this graphic presented by Dr. Allen and reproduced in the Guardian:

[Image: climate forecast and observations graphic from the Guardian article (Climate-forecast-and-obs-001.jpg)]

This graphic is a great illustration of how one can use pictures to obfuscate, rather than illustrate, the issue at hand.

First, let's look at the difference between the red line (10-year averaged real temperature) and the dashed black line (Dr. Allen's forecast). Note that Dr. Allen made his forecast in 1999. Find 1999 on the time axis and trace upwards to where it meets the temperature graph: what do you notice? 1999 is about the point of greatest disparity between Dr. Allen's forecast and the actual temperature - and yet that disparity coincides with the very year he made the forecast. (This disparity can be partially explained by his choice to use ten-year averaging, but even so it is still odd.) Next, look at the trend from that point onwards: the temperature data is nearly flat (were it not for the ten-year averaging it would be completely flat or slightly negative), and yet his forecast continues on an upwards slope unabated. In the article, Dr. Allen is quoted as trying to explain this discrepancy as "fluctuations around the trend" - despite the fact that these fluctuations persist for the entire forecast period, and it is only the mismatch between the forecast and the observed data at the time the forecast was made which allows the forecast to match the observed result in 2012.

Next, let's examine that gray swath which surrounds the temperature data. What is that swath supposed to mean? It looks like it's supposed to represent error bars, but error bars of what? It can't be the uncertainty of the forecast, since the forecast didn't exist in 1960. Nor can it be the uncertainty of the observed temperature, since the swath grows narrower as you go back in time, even though the real uncertainty in global temperature grows larger. So, what is it? It's nonsense, input into the graphic by hand, to make the discrepancy between the behavior of the forecast and the observed data appear smaller. This is also why the graphic extends backwards to 1960 and forwards to 2040. Remember, the forecast itself concerns only the time period from 1999 until 2012 (an even smaller time period than shown in the inset), so by including all of those extraneous points, rather than zooming in on the region of interest, the discrepancy appears smaller.

So, basically, the entire figure is designed to produce a misleading perception - a great example of "lies, damned lies, and statistics."

And yet, nevertheless, despite the issues I raised above, this forecast is the only one which comes close to replicating the behavior of the 21st century. All other forecasts miss badly in the direction of greater predicted warming. Thus, what is the basis for claiming that the climate models upon which these forecasts are based have predictive power?


And yet, nevertheless, despite the issues I raised above, this forecast is the only one which comes close to replicating the behavior of the 21st century. All other forecasts miss badly in the direction of greater predicted warming. Thus, what is the basis for claiming that the climate models upon which these forecasts are based have predictive power?

Thank you for your fully-cited, peer-reviewed meta-analysis.

Seriously, you're going to say 'all other forecasts fail badly', and we're just supposed to accept it? Just how much of the actual literature have you read? The only actual 'evidence' I see here is a single diagram from a flipping newspaper article.


I think the crux of your point is exactly that statistics are merely statistics and predictions are merely predictions. With so many variables and confounders in place, the best thing that can be achieved is a mere approximation of climate change, never a certainty. In that light, I love this comic from xkcd for portraying this dilemma:

[xkcd comic: "Significant" (significant.png)]


Seriously, you're going to say 'all other forecasts fail badly', and we're just supposed to accept it?

Ummm, yes?

Can you present me with an example of a forecast which has not failed to replicate the lack of warming over the last 15 years?


Does this count? (Since you wanted peer-review.)

How about this? Or this?

The lack of global warming over the last fifteen years is not a controversial fact. Even James Hansen has admitted it. I find it surprising that you question this, while also questioning the extent to which I have read the literature.


Well, there you go:

Declining solar insolation as part of a normal eleven-year cycle, and a cyclical change from an El Nino to a La Nina dominate our measure of anthropogenic effects because rapid growth in short-lived sulfur emissions partially offsets rising greenhouse gas concentrations. As such, we find that recent global temperature records are consistent with the existing understanding of the relationship among global surface temperature, internal variability, and radiative forcing, which includes anthropogenic factors with well known warming and cooling effects.

Growth in sulfur (and soot) production due to economic factors causes a temporary drop in temperature over the relevant period, a factor not taken into account by previous models. There, there's your answer.


I thought you might say that. Thanks for admitting my point; all of the models have failed.

Forgive me for not being quite so quick to accept a post-hoc explanation for why the models have failed (written by someone whose livelihood depends upon those models) as gospel truth in the absence of new models with demonstrated predictive power. It's easy to say "aha, this is why!" It's harder to build a new model demonstrating this conclusively.


Read a little further. They did exactly that.

Here we use a previously published statistical model (7) to evaluate whether anthropogenic emissions of radiatively active gases, along with natural variables, can account for the 1999–2008 hiatus in warming. To do so, we compile information on anthropogenic and natural drivers of global surface temperature, use these data to estimate the statistical model through 1998, and use the model to simulate global surface temperature between 1999 and 2008. Results indicate that net anthropogenic forcing rises slower than previous decades because the cooling effects of sulfur emissions grow in tandem with the warming effects of greenhouse gas concentrations. This slow-down, along with declining solar insolation and a change from El Nino to La Nina conditions, enables the model to simulate the lack of warming after 1998.


Yes. I know what they did.

What they did is called tuning, and it is a major fallacy within the field of computer modelling. Here's how it works:

Suppose you have a model based on some number of parameters which faithfully replicates some historical data set that you wish to try to model. You use this model to make a prediction, only to find out five or ten years down the line that the prediction was wrong. Thus, naturally, you go back and try to identify what made the model incorrect. You identify a factor that had not been included in your previous model, and, by adding it in, demonstrate that you can now replicate your data set (including the new data) faithfully. "Success!" you say. Not so fast.

Here's the problem. The more parameters a model has, the less predictive power it has. The reason for this is that any arbitrary data set can be matched faithfully by a model with sufficient parameters. Give me 100 points of random data, and I'll give you a 100-parameter model which matches that data exactly (actually, I'll give you an infinite number of 100-parameter models which match that data set exactly and which predict different values for the 101st data point). What are the odds that it matches the 101st data point? Nearly zero (depending on the range from which the random data is chosen).
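To make that concrete, here's a scaled-down numerical sketch of the same idea (10 points instead of 100, with a polynomial basis chosen purely for convenience; this is my own toy illustration, not anything from the climate literature):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Scaled-down sketch of the argument above (10 points instead of 100, purely
# illustrative): a model with as many free parameters as data points can
# reproduce random data essentially exactly, yet its "prediction" for the
# next point is fixed entirely by the noise it was fitted to.
rng = np.random.default_rng(42)
x = np.linspace(-1.0, 1.0, 11)          # 11 sample locations; the last is held out
y = rng.uniform(-1, 1, size=11)         # 11 random "observations"

# 10-parameter model: a degree-9 Chebyshev series fitted to the first 10 points
# (10 equations, 10 unknowns, so the fit interpolates the data).
coeffs = C.chebfit(x[:10], y[:10], deg=9)
fit = C.chebval(x, coeffs)

print("max misfit on the 10 fitted points:", np.max(np.abs(fit[:10] - y[:10])))
print("model's value for point 11:", fit[10], "  actual value:", y[10])
# The fitted points are matched essentially exactly, but the value produced for
# the 11th point bears no relation to the actual y[10]; it is determined
# entirely by the first 10 random numbers.
```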

By adding in new parameters to account for the difference between the original predictions and observation, they have actually produced a model which is even less valid than the model they had before. Now, it might be the case that they have gotten lucky - but there is no reason whatsoever to believe so, simply based on the fact that they can now account for the observed discrepancy.

As I said: "give me new models with demonstrated predictive power." They may have given a new model, but they've yet to demonstrate that model successfully predicting anything.

Edited by Stochasty

Right. Suuuure.

SI-2.4 Uncertainty About Anthropogenic Sulfur forcing

Uncertainty about anthropogenic sulfur forcing arises from two sources: uncertainty about emissions and uncertainty about the formula used to translate emissions to radiative forcing. One way to evaluate this uncertainty is to compare our estimate for the radiative forcing with published estimates. For example, our method for forecasting sulfur emissions in 2005 and converting those emissions to radiative forcing generates values of the direct (-0.269 W m⁻²) and indirect (-0.73 W m⁻²) effect. Although generated using a very different methodology, our values are close to the mean values for the direct (-0.4 ± 0.2 W m⁻²) and indirect (-0.7, with a 5 to 95% range of -0.3 to -1.8 W m⁻²) effects published by (21).

If you want to pay $42 to the scumsucking exploiters of academia that are Elsevier to access (21) in order to baselessly accuse those researchers of scientific misconduct as well, be my guest. But otherwise I don't see much point in continuing, if all you have left is to directly accuse people of fabricating data and methods.

Edited by Kryten

You obviously don't understand what actually goes on in climate modelling, or you wouldn't be responding as you are. I'm not accusing them of fabricating data. I'm accusing them of tuning. I'll try to give you a rundown on what this means to get you up to speed.

Let's start with the basics: go back to my "100 random data points" example. As I said, I can create an infinite number of models which will replicate any set of 100 data points exactly using 100 parameters. How do I go about it? First, start by choosing a basis set - for our purposes, let's use the Fourier basis (so, the set of cosine functions of arbitrary frequency). From that set, I choose 100 members, arbitrarily. I construct my model as the sum of those 100 elements of my basis set, each multiplied by a constant. Those constants are my set of 100 parameters, and with some simple linear algebra I can choose them so that my model exactly replicates my 100 data points. The "infinite number of models" comes from the fact that my choice of 100 basis elements was arbitrary: I can do this same process for any choice of 100 elements, each generating a unique model. Since my set of 100 data points gives rise to an infinite number of unique models (each with different predictions), this process has no predictive power.
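Here's that construction in code, scaled down to 20 points so it's readable (my own toy sketch; the sample locations and the two frequency sets are picked so the linear solves stay well conditioned, but in exact arithmetic any generic choice of distinct frequencies would do):

```python
import numpy as np

# The construction above, scaled down to 20 points (a toy sketch, not a climate
# model). Two different, equally arbitrary choices of 20 cosine basis functions
# both reproduce the same 20 random data points exactly, yet predict different
# values at a new point.
N = 20
rng = np.random.default_rng(0)
x = np.pi * (np.arange(N) + 0.5) / N        # sample locations
y = rng.uniform(-1, 1, size=N)              # arbitrary data to "model"

def fit_model(freqs):
    """Solve for coefficients c so that sum_k c[k]*cos(freqs[k]*x) hits y exactly."""
    M = np.cos(np.outer(x, freqs))
    c = np.linalg.solve(M, y)
    return lambda xq: np.cos(np.outer(np.atleast_1d(xq), freqs)) @ c

model_a = fit_model(np.arange(0, N))         # frequencies 0, 1, ..., 19
model_b = fit_model(np.arange(2 * N, N, -1)) # frequencies 40, 39, ..., 21

print("worst misfit, model A:", np.max(np.abs(model_a(x) - y)))   # ~ round-off
print("worst misfit, model B:", np.max(np.abs(model_b(x) - y)))   # ~ round-off
x_new = 0.5                                  # a point not in the training set
print("model A at x_new:", model_a(x_new)[0])
print("model B at x_new:", model_b(x_new)[0])  # generally disagrees with model A
```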

This is basically what the climate modellers do. They don't use cos functions as their basis - rather, they use things such as solar radiation, atmospheric concentrations of CO2, etc., and other data sets - but they are choosing basis sets nonetheless. Associated with each of these elements is a "forcing" - a constant parameter, whose purpose is exactly the same as in my arbitrary model above. They don't construct quite so simple a model - rather, they use non-linear interactions between elements of their basis sets (each of those non-linearities giving rise to yet more parameters) - but nevertheless the construction proceeds along the same lines. So, then, the process of producing a model is the process of choosing the appropriate values for each forcing. The way they go about it is exactly what I described above: choosing each parameter such that their model replicates historical data. They even admit it themselves: search the literature for references to "training data sets" for each model.

Now, there are a couple of differences between what they do and the arbitrary process I describe above. To their credit, they have fewer parameters than data points, which means their models are somewhat constrained. Furthermore, it is likely that at least some of the basis sets they are using really do have a correlation with observed global temperatures (solar radiation certainly). That doesn't make the situation much better, though. Suppose I were to tell you that I knew of a process which could be described exactly by some finite but unknown number of cos functions. Suppose I then gave you 100, or 1000, or 100,000 data points from that process, and asked you to try to replicate it. Could you do so? The answer is of course not - the system is under-determined. In order to produce a faithful replication I would either have to tell you which basis functions the process was constructed from or I would have to give you a continuum of data points to describe the process.

Nevertheless, this is what the modellers are claiming to be able to do: they have some set of purported basis elements and some finite number of temperature readings and are saying that they can construct a faithful replication. It's an impossible problem, so it's no wonder their results are junk. Oh, sure, they give all sorts of explanations as to why their choices for the forcing parameters are justified, but in the end it comes down to tuning.

In fact, the various models don't even agree on what the various forcing parameters are supposed to be. Do a survey of the models looking for the various forcings and you'll find no agreement between models, despite the fact that they all manage to replicate historical temperatures. That's another artifact of tuning: slightly different choices of model can lead to large differences in the values of each parameter.

Note that I'm expressly not accusing them of falsifying the input data (i.e., the observed values for solar radiation, atmospheric concentrations, etc.). That is a completely different issue from tuning, and while there are cases within climate science where I have seen shenanigans played with the data, this is not one of those cases.

Edited by Stochasty

Modern climate models are quite a bit more complex than that. With growing computing power they've come a long way, from semi-empirical thermal correlations like you describe to enormous high-resolution global circulation fluid-dynamics simulations. There are still semi-empirical correlations contained in the models, but their predictive power is well known to have improved dramatically as computational power has grown to support massively improved resolutions just over the past 10-15 years.

Have you noticed how dramatically the predictive accuracy of hurricane path prediction has improved in the past decade? That's due to innovation in these models. Hurricane prediction works on a shorter meteorological time scale than the long-term averaged climate predictions, but many of the underlying modeling techniques are very closely related.

Have a look through http://www.cs.berkeley.edu/~demmel/cs267_Spr12/Lectures/wehner_cs267_2011.ppt

Start at slide 14 to get to the interesting part.


tavert, I don't disagree with you that meteorological models have vastly improved accuracy compared with a few years ago; nonetheless, they still are not accurate more than a few days out (weeks, at most) due to the chaotic nature of weather.

The problem with trying to extend this success to the climatological models is that the primary forcings relied upon by the climatological models are still determined by tuning, and make little day-to-day difference in the behavior of the meteorological models (thus, those models cannot be used to test the efficacy of the choice of parameters).

Granted, I'll admit that I'm not entirely current on the state of the newest meteorological models, so this assessment of mine might be slightly out of date. If you happen to know of a study where a change in the choice of a forcing parameter has altered the day-to-day predictive power of (say) the GCM climate model, please let me know. However, I somehow doubt this is the case, because when those models are run for longer duration climate simulations they are intentionally run at low resolution, which would mean day-to-day weather has little to no impact on the tuning of the climate forcings.

EDIT: I went through the powerpoint you linked. Interesting, but it really illustrates my point. They don't have the compute power yet to do a full hydrodynamical simulation of the atmosphere over long durations (although we're getting close), so there's no way yet to link forcings derived from climate models to short-term weather forecasting.

Now, producing a full hydrodynamical simulation of the Earth's atmosphere is the right way to go about this, since we have a pretty good idea of the physics involved in hydrodynamics. Once they get to the point of being able to do that, I'll be much more willing to trust the outcomes of such models than I am today (assuming that those models use actual CO2 radiation absorption/emission values, and not tuning-derived forcings).

Edited by Stochasty

You can never completely eliminate tuning. Show me a single scientist or engineer who does absolutely no tuning at all and I'd be impressed. Climate modeling is not exactly my field, so I'm not super familiar with the literature, but everyone knows that uncertainty is an issue. Parametric sensitivity studies and the like will hopefully help narrow these uncertainties as time goes on. How much of this type of analysis has been carried out already? I don't know, and I'm not the right person to ask.
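To illustrate what I mean by a parametric sensitivity study, here's a toy sketch (a made-up zero-dimensional energy-balance model, purely for illustration, not any published climate model):

```python
# One-at-a-time parametric sensitivity sweep on a toy zero-dimensional
# energy-balance model: (1 - albedo) * S / 4 = emissivity * sigma * T^4.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp(solar=1361.0, albedo=0.30, emissivity=0.61):
    """Equilibrium surface temperature (K) of the toy energy-balance model."""
    return ((1.0 - albedo) * solar / (4.0 * emissivity * SIGMA)) ** 0.25

baseline = equilibrium_temp()
# Perturb each parameter by +/- 5% around its nominal value, one at a time,
# and see how much the output moves.
for name, nominal in [("solar", 1361.0), ("albedo", 0.30), ("emissivity", 0.61)]:
    lo = equilibrium_temp(**{name: 0.95 * nominal})
    hi = equilibrium_temp(**{name: 1.05 * nominal})
    print(f"{name:10s}: T spans {min(lo, hi):6.1f} .. {max(lo, hi):6.1f} K"
          f"  (baseline {baseline:.1f} K)")
```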


Well, I don't think I've done any tuning in any of the models I've studied (at least, I can't remember having done any, although I've certainly done studies of the parameter spaces of my models), but the situation in my field is not at all analogous to climate modelling, so it doesn't count. I understand what you mean, and to some extent you are right; the problem occurs when the viability of your model depends on tuning, which, for climate science, seems to be the case (at least to the extent I've looked into the issue).

As far as parametric sensitivity studies go: I agree that these would help tremendously. I don't know of any such studies having been made, but I'm not a climate scientist. If anyone who happens across this post does know of any, I'd appreciate having them pointed out.


The great problem with "climate models" is that they are all written under the assumption that positive feedback between CO2 levels and temperature exists, for which there is no scientific evidence whatsoever (indeed, what evidence exists points to a small negative feedback).

Another great problem with them is that they do not take into account most of the other factors that influence the Earth's climate. Fluctuations in solar activity are rarely if ever included in the models, and when they are, they only take the 11-year sunspot cycle into account (which has been broken for years...), which of course nicely evens out under a 10-year average.

Things like the influence on cloud formation of the solar wind and other charged particles impacting the atmosphere from space (a real phenomenon) are never even considered (in the models' defense, this influence wasn't known when the big ones used by the IPCC were first conceived in the 1980s, but it could have been added since and deliberately hasn't been, because it would reduce or eliminate the warming predicted by the models - warming on which the IPCC and so many in the "scientific community" depend for their continued power and income).

So we have models that claim to accurately predict the far (and even near) future based on data that's either known to be incorrect, known to be incomplete, or known not to be understood.

And that by people who can't even look at the sky and predict whether it will rain or not later in the day :huh:


So, then, the process of producing a model is the process of choosing the appropriate values for each forcing. The way they go about it is exactly what I described above: choosing each parameter such that their model replicates historical data. They even admit it themselves: search the literature for references to "training data sets" for each model.

It's worse than that: they have admitted to changing historical data to match their models, using "the data must have been inaccurately recorded or it would have matched the model" as an excuse.

As a result, the "climatologists" have attempted to scrub the little ice age from the history books, and the early medieval cold period as well.


I have worked with stellar astrophysics codes involving hydrodynamics and radiative transfer, and I know people who work with stellar modeling codes. I also know other people who work with numerical simulations that involve hydrodynamics, radiative transfer, energy generation, etc. It has always been disconcerting to me how many free parameters there are in these codes... the number of "knobs" that you can twiddle to adjust the relative effectiveness of this, that, and the other thing.

Back when we thought there were 15 billion-year-old stars in our Galaxy, the stellar structure modelers were happily able to make models telling us all about their interiors. Later, when better observations made it clear that such old stars couldn't possibly exist, these modelers said, "no problem," and twiddled their knobs to get results that matched the new observations. And guess what happened later when new measurements showed that the opacities these guys were using were wrong? Yes... they incorporated the new opacities and twiddled their knobs and were again able to get the answer they knew was correct.

Plus... there are many cases where you can get the correct answer with different sets of knob settings. Because if you have too much of THIS in there, then too little of THAT can compensate. And you can't tell which of these sets of settings is correct if you don't have observations that will match one but not the other.

Now don't get the impression that this means that these numerical simulations are useless. Once you have your code settings dialed in so that it gives you the correct answers for stars at the extreme ends of your range of study (and points in between), it's quite possible that what the codes tell you about other stars in between is true.

But I get very leery if anyone takes a complex numerical code and twiddles the knobs to match some known observations... and then proceeds to tell me what is going to happen OUTSIDE of that range. Extrapolation is very iffy in these cases. You might have a set of knob settings that gives you a good match to all previous data, but that does not mean the settings are correct (you may have faulty settings that are canceling each other out)... and when you try to extrapolate, your incorrect settings may no longer cancel, and you'll get erroneous results.
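Here's a quick toy illustration of that "knobs that cancel" problem (a sketch I cooked up, not any real stellar or climate code):

```python
import numpy as np

# Two settings of the same ten "knobs" that are practically indistinguishable
# over the calibrated range can still disagree badly outside it.
rng = np.random.default_rng(3)
x_cal = np.linspace(0.0, 1.0, 30)                         # calibration range
y_cal = np.exp(x_cal) + rng.normal(0, 0.01, x_cal.size)   # "observations"

A = np.vander(x_cal, 10, increasing=True)                 # design matrix (degree 9)
knobs_a, *_ = np.linalg.lstsq(A, y_cal, rcond=None)       # best-fit knob settings

# Nudge the knobs along the direction the calibration data constrain least
# (the right singular vector with the smallest singular value), i.e. a change
# whose effects nearly cancel everywhere inside the calibrated range.
_, _, Vt = np.linalg.svd(A)
knobs_b = knobs_a + 10.0 * Vt[-1]

def model(knobs, xq):
    return np.vander(np.atleast_1d(xq), 10, increasing=True) @ knobs

in_range = model(knobs_a, x_cal) - model(knobs_b, x_cal)
print("RMS disagreement inside [0, 1]:", np.sqrt(np.mean(in_range**2)))
print("disagreement at x = 1.5      :", model(knobs_a, 1.5)[0] - model(knobs_b, 1.5)[0])
# The two knob sets agree inside the calibrated range to far better than the
# measurement noise, yet typically differ by order one at x = 1.5; matching the
# calibration data does not tell you which settings (if either) are right.
```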

I'm not a climate modeler, so I can't comment on the state of those particular codes. But I think building more nuclear power plants would be the obvious solution to the potential problem, were it not for irrational fears.

Edited by Brotoro

The great problem with "climate models" is that they are all written under the assumption that positive feedback between CO2 levels and temperature exists, for which there is no scientific evidence whatsoever (indeed, what evidence exists points to a small negative feedback).

Another great problem with them is that they do not take into account most of the other factors that influence the Earth's climate. Fluctuations in solar activity are rarely if ever included in the models, and when they are, they only take the 11-year sunspot cycle into account (which has been broken for years...), which of course nicely evens out under a 10-year average.

No scientific evidence whatsoever for positive feedback? You mean if you disregard the ice-albedo effect, increased water vapour concentration in the atmosphere, deforestation and the release of methane clathrates?

There is indeed a small negative feedback built into the Stefan-Boltzmann law, but the positive feedback effects countering it are actually quite staggering in number.

Additionally, for those of you who aren't climatologists (and indeed, I'm not; I'm a physicist), I'd highly recommend looking at the idealised greenhouse model, as it's a fairly cut-down way of probing the mathematics of climate change at an accessible level. This model predicts a climate sensitivity (the effect on temperature of doubling CO2 concentrations) of between 1.2 degrees (raw) and 2.4 degrees (with a crude fix for modelling water vapour concentration). The latter figure is broadly in line with many of the far more advanced climate models' predictions, which, from what I've read, seem to be converging around the 2.5-3 degree figure, though there are outliers above and below.
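For anyone who wants to see where numbers of that order come from, here's a back-of-envelope sketch (my own simplification, not the full idealised greenhouse derivation; the exact figures depend on the assumptions, including the feedback factor I plug in):

```python
import numpy as np

# Back-of-envelope climate sensitivity: take the standard logarithmic CO2
# forcing, convert it to a temperature change via the Planck (blackbody)
# response, then amplify by an assumed net feedback factor.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
T_EMIT = 255.0        # Earth's effective emission temperature, K

delta_F = 5.35 * np.log(2.0)          # ~3.7 W m^-2 forcing for doubled CO2
planck = 4.0 * SIGMA * T_EMIT**3      # ~3.8 W m^-2 K^-1 blackbody response
dT_no_feedback = delta_F / planck     # ~1 K per doubling, no feedbacks

for f in (0.0, 0.5):                  # f = assumed net feedback factor
    print(f"net feedback factor {f}: sensitivity ~ "
          f"{dT_no_feedback / (1.0 - f):.1f} K per doubling")
```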

In other words, the fact that global temperatures increase in line with rising CO2 concentrations is simply not in question. The only meaningful question to ask is "to what extent does CO2 concentration affect global temperature?" The answer to this question is somewhere between "enough to mildly inconvenience humanity" at the lower end and "enough to substantially inhibit global food production" at the worst.

If we're at the lower end, there is probably sufficient time for technologies superior to burning fossil fuels to displace their modern counterparts before we risk dangerous temperature rises (though I'll note that such an occurrence requires a climate sensitivity lower than most models presently predict). If we're at the higher end, we're almost out of time to start drastically cutting emissions before dangerous levels of climate change are effectively locked in (and even the average result only gives us a few years of breathing space to start seriously taking some action).

The upshot is that we can't afford to watch and wait while climatologists produce increasing numbers of more and more comprehensive models that are all effectively saying the same thing. Especially since, as Brotoro suggests, we have a perfectly good solution to the problem: building more nuclear power plants. There is absolutely no reason that any western country shouldn't be able to have completely decarbonised its energy sector and be running on 100% nuclear and renewables by 2050 - France has effectively already done it, and its emissions are consequently substantially lower and its electricity substantially cheaper than here in the UK, despite very similar populations, sizes of economy, etc.


Once you have your code settings dialed in so that it gives you the correct answers for stars at the extreme ends of your range of study (and points in between), it's quite possible that what the codes tell you about other stars in between is true.

But I get very leery if anyone takes a complex numerical code and twiddles the knobs to match some known observations... and then proceeds to tell me what is going to happen OUTSIDE of that range. Extrapolation is very iffy in these cases. You might have a set of knob settings that gives you a good match to all previous data, but that does not mean the settings are correct (you may have faulty settings that are canceling each other out)... and when you try to extrapolate, your incorrect settings may no longer cancel, and you'll get erroneous results.

It's worse even than this, though, since for complicated models it is quite often difficult to determine what qualifies as interpolation and what is actually extrapolation. A good example of this is to consider dynamics in the region of a critical point. Because (by definition) the behavior of a system changes dramatically at a critical point, and because the location and behavior of a critical point within the models often depends strongly on the choice of parameters, the dynamics of the model cannot be trusted in the region of such points even when good quality training data exists on both sides.

I'd highly recommend looking at the idealised greenhouse model, as it's a fairly cut-down way of probing the mathematics of climate change at an accessible level. This model predicts a climate sensitivity (the effect on temperature of doubling CO2 concentrations) of between 1.2 degrees (raw) and 2.4 degrees (with a crude fix for modelling water vapour concentration). The latter figure is broadly in line with many of the far more advanced climate models' predictions, which, from what I've read, seem to be converging around the 2.5-3 degree figure, though there are outliers above and below.

The problem is that this model, like all of the others, is demonstrably wrong. Not a single model successfully predicted the lack of warming in the 21st century. Thus, there is no reason whatsoever to trust the derived climate sensitivities of those models.

Regarding positive feedbacks, the question isn't whether or not such feedbacks exist (the ones you mentioned are good examples) - the question is whether or not such feedbacks dominate over the known negative feedbacks (such as cloud formation). Judging by the history of the evolution of the Earth's climate, the answer is an emphatic "no."

Positive feedbacks are notorious for generating unstable systems - indeed, it is exactly this instability that the climate science community relies upon when making predictions of catastrophic future warming - and yet geological evidence points to the fact that the Earth's climate has been remarkably stable over its history, varying by no more than about 15 degrees C despite vast changes in atmospheric composition during that period. This is not the behavior of a system characterized by positive feedbacks, but rather of a governed system. Indeed, I have seen evidence in the literature (I'll try to look for a link) that tropical weather systems form exactly such a governing mechanism: earlier onset of cloud formation and more frequent storm systems during warm periods, and the reverse during cool periods.
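To put a number on the stability argument, here's a trivial sketch (toy numbers only, not a climate model) of how the net feedback factor controls whether the response converges or runs away:

```python
# An initial forcing-driven warming dT0 gets fed back as f times the previous
# increment at each round; the total converges only while f < 1.
def total_warming(dT0, f, rounds=200):
    total, increment = 0.0, dT0
    for _ in range(rounds):
        total += increment
        increment *= f          # each round of feedback adds f times the last
    return total

for f in (-0.3, 0.3, 0.9, 1.05):
    print(f"net feedback factor {f:+.2f}: total warming ~ {total_warming(1.0, f):.1f} x dT0")
```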

Edited by Stochasty

"There's an inconsistency between predictions and recent observations" should not translate to "la la la, nothing to see here, ignore everything."

We've only got one planet, one atmosphere with which to live and breathe. We're conducting an enormous uncontrolled experiment with human activity dumping gigatons of CO2 into the atmosphere every year. This is not something we should be continuing to do.


We're conducting an enormous uncontrolled experiment with human activity dumping gigatons of CO2 into the atmosphere every year. This is not something we should be continuing to do.

This is wrong. The problem with these types of arguments is that they focus only on the potential harms of continuing, while ignoring the very real negative consequences of stopping. They also ignore the possibility that, if we don't stop, we might yet in the future find a means of both controlling the experiment and avoiding the negative consequences.

