There are various rational arguments that indicate that climate scientists are overconfident in their own knowledge.
Really? It’s always possible to make plausible-sounding, rational-sounding arguments for almost any proposition, especially when you can formulate them as conditional probabilities. It’s much harder to actually gather the statistics to back those up. I’d like to see these, please.
As a start, there are plenty of studies showing that most humans are overconfident most of the time.
Long-Term Capital Management sank because of what its founders considered to be a 10-sigma event. I would guess that climate models quite often use normal distributions as proxies for quantities that behave like normal distributions 99% of the time, but not in the extreme tails.
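The gap between “behaves like a normal distribution 99% of the time” and “is a normal distribution” is easy to simulate (a sketch, using a Student’s t distribution with 3 degrees of freedom as a stand-in for a fat-tailed quantity):

```python
import math
import random

random.seed(0)

DF = 3
SCALE = math.sqrt(DF / (DF - 2))  # standard deviation of a t distribution with DF = 3

def student_t():
    # Ratio-of-normals construction: T = Z / sqrt(chi2 / df).
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(DF))
    return z / math.sqrt(chi2 / DF)

n = 100_000
# Count events beyond 5 standard deviations under each distribution.
normal_extreme = sum(abs(random.gauss(0, 1)) > 5 for _ in range(n))
t_extreme = sum(abs(student_t() / SCALE) > 5 for _ in range(n))

print(normal_extreme)  # essentially zero: P(|Z| > 5) is about 6e-7
print(t_extreme)       # hundreds of "five-sigma" events
```

Even after matching standard deviations, the fat-tailed distribution produces hundreds of events that a normal model rates as five-sigma, i.e. effectively impossible.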
A second issue is that climate scientists generally validate their models through “hindcasts”. They tend to treat making accurate hindcasts as nearly equivalent to making accurate forecasts.
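A toy illustration of why an accurate hindcast need not imply an accurate forecast: a hypothetical “model” that simply memorizes the past hindcasts perfectly while saying almost nothing about the future (all numbers are invented for the example):

```python
import random

random.seed(1)

# A toy "climate": a linear trend plus noise.
def observe(year):
    return 0.02 * year + random.gauss(0, 0.5)

past = {year: observe(year) for year in range(100)}

# A model that just memorizes the past hindcasts perfectly...
def memorizing_model(year):
    return past.get(year, 1.0)  # falls back to a constant (roughly the historical mean)

hindcast_error = sum(abs(memorizing_model(y) - past[y]) for y in past) / 100
print(hindcast_error)  # 0.0: a perfect hindcast

# ...but that tells us little about forecast skill.
future = {year: observe(year) for year in range(100, 120)}
forecast_error = sum(abs(memorizing_model(y) - v) for y, v in future.items()) / 20
print(forecast_error > 0.5)  # True: the forecasts are far worse
```

Real climate models are of course not lookup tables, but the same asymmetry applies whenever a model has been tuned against the very record it is validated on.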
Beware fully general counterarguments. In this case, the overconfidence argument applies just as well to people who aren’t professional climate scientists as to those who are, and then other cognitive biases, such as the Dunning-Kruger effect, start becoming relevant.
Ok. That seems like a reasonable argument. So how much of a reduction is warranted may be up in the air then. There’s also a serious denotative v. connotative issue here, since one needs to carefully distinguish the actual statement “Climate scientists are likely overconfident, just as almost everyone is” from all the statements made doubting climate science, anthropogenic global warming, etc. If you are only talking about a drop from .99 to .95 (or even from say .99 to .9) that isn’t going to impact policy considerations much.
If you are only talking about a drop from .99 to .95 (or even from say .99 to .9) that isn’t going to impact policy considerations much.
I think it matters when it comes to geoengineering policy making. If the policy community thinks that climate scientists are really good at predicting climate, I think there’s a good chance that they will sooner or later go for geoengineering.
If we want to stay alive in the next century, it would be good if policy makers could distinguish between events with 0.9, 0.99, and 0.999 certainty.
Even a 0.001 chance that a given asteroid will extinguish humanity is too high. It’s valuable to keep in mind that small chances happen from time to time and that you have to do scenario planning that integrates them.
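The underlying arithmetic is plain expected-value reasoning; with made-up placeholder numbers:

```python
# Hypothetical numbers, purely for illustration.
p_impact = 0.001            # estimated chance the asteroid hits
loss_if_impact = 8e9        # everyone alive today, in lives
cost_of_deflection = 1e4    # lives-equivalent cost of a deflection program

expected_loss = p_impact * loss_if_impact   # 8 million lives in expectation
print(expected_loss > cost_of_deflection)   # True: act even on a 0.1% chance
```

A 0.001 probability of losing everything dominates almost any realistic mitigation cost, which is why the 0.9 vs. 0.99 vs. 0.999 distinction can matter for policy.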
Sure, but right now, almost no one is talking about geoengineering as a serious solution. The policy focus right now is much more on reducing carbon dioxide production. So in the contexts where these discussions are actually occurring, those differences won’t matter much. Right now, I’d focus much more on getting policy makers to be able to reliably distinguish something like 0.9 from something like 0.1. On this particular issue, even that is apparently difficult. Getting policy makers to appreciate an extremely rough estimate is a much higher priority.
Sure, but right now, almost no one is talking about geoengineering as a serious solution.
That’s a very poor perspective when you care about existential risk. Memes do have effects 10 or 20 years down the road.
It’s bad to say things that are clearly false, such as that the evidence for climate change is comparable to that for evolution. Evolution being true is supported by much better evidence than p = 0.999.
The point of LessWrong isn’t to focus on ideas with short-term payoffs. It’s rather to focus on finding methods for thinking rationally about issues, and on letting a lot of people in their twenties learn those methods. Then, when those smart people are in positions of authority in their thirties or forties, you get a payoff.
If scientists lie to the world to get policy makers to make good short-term policy decisions, that’s expensive over the long term. Scientists shouldn’t orient themselves towards short-term decision making, but focus on staying with the truth.
Overconfidence shouldn’t lead us to believe that p = 0.5. It would, however, make sense to deduct a few percentage points from the stated result.
If a climate scientist tells you something is 0.99 likely to be true, maybe it makes sense to treat the event as 0.95 likely to be true.
You don’t need to fully understand how something works to know that someone doesn’t really have 0.99 certainty for a claim.
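One way to make “deduct a few percentage points” precise is to shrink the stated probability toward 0.5 in log-odds space, which keeps the adjusted value between 0 and 1 (a sketch; the shrinkage factor 0.64 is an arbitrary illustrative choice, picked so that a stated 0.99 maps to roughly 0.95):

```python
import math

def discount(p, shrink=0.64):
    """Shrink a stated probability toward 0.5 in log-odds space.

    shrink=0.64 is an arbitrary illustrative choice: it maps a
    stated 0.99 to roughly 0.95.
    """
    logit = math.log(p / (1 - p))
    return 1 / (1 + math.exp(-shrink * logit))

print(round(discount(0.99), 2))  # ≈ 0.95
print(round(discount(0.9), 2))   # ≈ 0.80
```

Shrinking in log-odds rather than subtracting directly means a stated 0.5 stays at 0.5 and extreme claims are discounted more heavily than modest ones.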
Go, start reading:
http://wattsupwiththat.com/
http://climateaudit.org/