You’re Entitled to Arguments, But Not (That Particular) Proof
Followup to: Logical Rudeness
“Modern man is so committed to empirical knowledge, that he sets the standard for evidence higher than either side in his disputes can attain, thus suffering his disputes to be settled by philosophical arguments as to which party must be crushed under the burden of proof.”
-- Alan Crowe
There’s a story—in accordance with Poe’s Law, I have no idea whether it’s a joke or it actually happened—about a creationist who was trying to claim a “gap” in the fossil record, two species without an intermediate fossil having been discovered. When an intermediate species was discovered, the creationist responded, “Aha! Now there are two gaps.”
Since I’m not a professional evolutionary biologist, I couldn’t begin to rattle off all the ways that we know evolution is true; true facts tend to leave traces of themselves behind, and evolution is the hugest fact in all of biology. My specialty is the cognitive sciences, so I can tell you of my own knowledge that the human brain looks just like we’d expect it to look if it had evolved, and not at all like you’d think it would look if it’d been intelligently designed. And I’m not really going to say much more on that subject. As I once said to someone who questioned whether humans were really related to apes: “That question might have made sense when Darwin first came up with the hypothesis, but this is the twenty-first century. We can read the genes. Human beings and chimpanzees have 95% shared genetic material. It’s over.”
Well, it’s over, unless you’re crazy like a human (ironically, more evidence that the human brain was fashioned by a sloppy and alien god). If you’re crazy like a human, you will engage in motivated cognition; and instead of focusing on the unthinkably huge heaps of evidence in favor of evolution, the innumerable signs by which the fact of evolution has left its heavy footprints on all of reality, the uncounted observations that discriminate between the world we’d expect to see if intelligent design ruled and the world we’d expect to see if evolution were true...
...instead you search your mind, and you pick out one form of proof that you think evolutionary biologists can’t provide; and you demand, you insist upon that one form of proof; and when it is not provided, you take that as a refutation.
You say, “Have you ever seen an ape species evolving into a human species?” You insist on videotapes—on that particular proof.
And that particular proof is one we couldn’t possibly be expected to have on hand; it’s a form of evidence we couldn’t possibly be expected to be able to provide, even given that evolution is true.
Yet the illogical inference gets drawn that if a videotape would provide definite proof, then, likewise, the absence of a videotape must constitute definite disproof. Or perhaps just render all other arguments void and turn the issue into a mere matter of personal opinion, with no one’s opinion being better than anyone else’s.
So far as I can tell, the position of human-caused global warming (anthropogenic global warming aka AGW) has the ball. I get the impression there’s a lot of evidence piled up, a lot of people trying and failing to poke holes, and so I have no reason to play contrarian here. It’s now heavily politicized science, which means that I take the assertions with a grain of skepticism and worry—well, to be honest I don’t spend a whole lot of time worrying about it, because (a) there are worse global catastrophic risks and (b) lots of other people are worrying about AGW already, so there are much better places to invest the next marginal minute of worry.
But if I pretend for a moment to live in the mainstream mental universe in which there is nothing scarier to worry about than global warming, and a 6 °C (11 °F) rise in global temperatures by 2100 seems like a top issue for the care and feeding of humanity’s future...
Then I must shake a disapproving finger at anyone who claims the state of evidence on AGW is indefinite.
Sure, if we waited until 2100 to see how much global temperatures increased and how high the seas rose, we would have definite proof. We would have definite proof in 2100, however, and that sounds just a little bit way the hell too late. If there are cost-effective things we can do to mitigate global warming—and by this I don’t mean ethanol-from-corn or cap-and-trade, more along the lines of standardizing on a liquid fluoride thorium reactor design and building 10,000 of them—if there’s something we can do about AGW, we need to do it now, not in a hundred years.
When the hypothesis at hand makes time valuable—when the proposition at hand, conditional on its being true, means there are certain things we should be doing NOW—then you’ve got to do your best to figure things out with the evidence that we have. Sure, if we had annual data on global temperatures and CO2 going back to 100 million years ago, we would know more than we do right now. But we don’t have that time-series data—not because global-warming advocates destroyed it, or because they were neglectful in gathering it, but because they couldn’t possibly be expected to provide it in the first place. And so we’ve got to look among the observations we can perform, to find those that discriminate between “the way the world could be expected to look if AGW is true / a big problem”, and “the way the world would be expected to look if AGW is false / a small problem”. If, for example, we discover large deposits of frozen methane clathrates that are released with rising temperatures, this at least seems like “the sort of observation” we might be making if we live in the sort of world where AGW is a big problem. It’s not a necessary connection, it’s not sufficient on its own, it’s something we could potentially also observe in a world where AGW is not a big problem—but unlike the perfect data we can never obtain, it’s something we can actually find out, and in fact have found out.
Yes, we’ve never actually experimented to observe the results over 50 years of artificially adding a large amount of carbon dioxide to the atmosphere. But we know from physics that it’s a greenhouse gas. It’s not a privileged hypothesis we’re pulling out of nowhere. It’s not like saying “You can’t prove there’s no invisible pink unicorn in my garage!” AGW is, ceteris paribus, what we should expect to happen if the other things we believe are true. We don’t have any experimental results on what will happen 50 years from now, and so you can’t grant the proposition the special, super-strong status of something that has been scientifically confirmed by a replicable experiment. But as I point out in “Scientific Evidence, Legal Evidence, Rational Evidence”, if science couldn’t say anything about that which has not already been observed, we couldn’t ever make scientific predictions by which the theories could be confirmed. Extrapolating from the science we do know, global warming should be occurring; you would need specific experimental evidence to contradict that.
We are, I think, dealing with that old problem of motivated cognition. As Gilovich says: “Conclusions a person does not want to believe are held to a higher standard than conclusions a person wants to believe. In the former case, the person asks if the evidence compels one to accept the conclusion, whereas in the latter case, the person asks instead if the evidence allows one to accept the conclusion.” People map the domain of belief onto the social domain of authority, with a qualitative difference between absolute and nonabsolute demands: If a teacher tells you certain things, you have to believe them, and you have to recite them back on the test. But when a student makes a suggestion in class, you don’t have to go along with it—you’re free to agree or disagree (it seems) and no one will punish you.
And so the implicit emotional theory is that if something is not proven—better yet, proven using a particular piece of evidence that isn’t available and that you’re pretty sure is never going to become available—then you are allowed to disbelieve; it’s like something a student says, not like something a teacher says.
You demand particular proof P; and if proof P is not available, then you’re allowed to disbelieve.
And this is flatly wrong as probability theory.
If the hypothesis at hand is H, and we have access to pieces of evidence E1, E2, and E3, but we do not have access to proof X one way or the other, then the rational probability estimate is the result of the Bayesian update P(H|E1,E2,E3). You do not get to say, “Well, we don’t know whether X or ~X, so I’m going to throw E1, E2, and E3 out the window until you tell me about X.” I cannot begin to describe how much that is not the way the laws of probability theory work. You do not get to screen off E1, E2, and E3 based on your ignorance of X!
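To make the bookkeeping concrete, here is a minimal sketch in Python (the prior and likelihood ratios are invented purely for illustration):

```python
# Toy Bayesian update on the evidence we actually have: E1, E2, E3.
# The unknown proof X simply doesn't appear anywhere in the update --
# ignorance of X gives no license to discard E1..E3.

def posterior(prior, likelihood_ratios):
    """Update a prior probability by a sequence of likelihood ratios
    P(Ei|H) / P(Ei|~H), using the odds form of Bayes' theorem."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

prior = 0.5                  # before looking at any evidence (invented)
ratios = [4.0, 3.0, 2.0]     # E1, E2, E3 each favor H (invented numbers)

p = posterior(prior, ratios)
print(round(p, 3))           # the rational estimate is P(H|E1,E2,E3) = 0.96
```

Nothing in this computation waits on X; throwing E1, E2, E3 “out the window until you tell me about X” corresponds to no valid operation on these numbers.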
Nor do you get to ignore the arguments that influence the prior probability of H—the standard science by which, ceteris paribus and without anything unknown at work, carbon dioxide is a greenhouse gas and ought to make the Earth hotter.
Nor can you hold up the nonobservation of your particular proof X as a triumphant refutation. If we had time cameras and could look into the past, then indeed, the fact that no one had ever seen primates evolving into humans with their own eyes would refute the hypothesis. But given that time cameras don’t exist, then, assuming evolution to be true, we don’t expect anyone to have witnessed humans evolving from apes, for the laws of natural selection require that this have happened far in the distant past. And so, once you have updated on the fact that time cameras don’t exist (that is, computed P(Evolution|~Camera)), and the nonexistence of time cameras hardly seems to refute the theory of evolution, you obtain no further evidence by observing ~Video: P(Evolution|~Video,~Camera) = P(Evolution|~Camera). In slogan form, “The absence of unobtainable proof is not even weak evidence of absence.” See the appendix for details.
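The screening-off claim can be checked numerically. In this sketch (the starting probability is invented), the likelihood of ~Video is 1 under both hypotheses once we condition on ~Camera, so the update is a no-op:

```python
# Toy check that a proof no one could possibly obtain carries no
# evidence: if P(~Video | ~Camera) = 1 whether or not evolution is
# true, then observing ~Video leaves the posterior exactly untouched.

def update(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) by Bayes' theorem, given P(E|H) and P(E|~H)."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1.0 - prior))

p_evolution = 0.999                       # after E1, E2, E3 (invented)

# Given no time cameras, nobody has a video under either hypothesis:
p_not_video_given_evo = 1.0
p_not_video_given_not_evo = 1.0

p_after = update(p_evolution, p_not_video_given_evo, p_not_video_given_not_evo)
print(p_after == p_evolution)             # the update changes nothing
```

Contrast an obtainable proof: if the two hypotheses assigned different likelihoods to the observation, the same function would move the posterior, which is exactly why the available evidence E1, E2, E3 counts and the unobtainable ~Video does not.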
(And while we’re on the subject, yes, the laws of probability theory are laws, rather than suggestions. It is like something the teacher tells you, okay? If you’re going to ignore the Bayesian update you logically have to perform when you see a new piece of evidence, you might as well ignore outright mathematical proofs. I see no reason why it’s any less epistemically sinful to ignore probabilities than to ignore certainties.)
Throwing E1, E2, and E3 out the window, and ignoring the prior probability of H, because you haven’t seen unobtainable proof X; or holding up the nonobservation of X as a triumphant refutation, when you couldn’t reasonably expect to see X even given that the underlying theory is true; all this is more than just a formal probability-theoretic mistake. It is logically rude.
After all—in the absence of your unobtainable particular proof, there may be plenty of other arguments by which you can hope to figure out whether you live in a world where the hypothesis of interest is true, or alternatively false. It takes work to provide you with those arguments. It takes work to provide you with extrapolations of existing knowledge to prior probabilities, and items of evidence with which to update those prior probabilities, to form a prediction about the unseen. Someone who does the work to provide those arguments is doing the best they can by you; throwing the arguments out the window is not just irrational, but logically rude.
And I emphasize this, because it seems to me that the underlying metaphor of demanding particular proof amounts to saying: “You are supposed to provide me with a video of apes evolving into humans, I am entitled to see it with my own eyes, and it is your responsibility to make that happen; and if you do not provide me with that particular proof, you are deficient in your duties of argument, and I have no obligation to believe you.” And this is, in the first place, bad math as probability theory. And it is, in the second place, an attitude of trying to be defensible rather than accurate, the attitude of someone who wants to be allowed to retain the beliefs they have, and not the attitude of someone who is honestly curious and trying to figure out which possible world they live in, by whatever signs are available. But if these considerations do not move you, then even in terms of the original and flawed metaphor, you are in the wrong: you are entitled to arguments, but not that particular proof.
Ignoring someone’s hard work to provide you with the arguments you need—the extrapolations from existing knowledge to make predictions about events not yet observed, the items of evidence that are suggestive even if not definite and that fit some possible worlds better than others—and instead demanding proof they can’t possibly give you, proof they couldn’t be expected to provide even if they were right—that is logically rude. It is invalid as probability theory, foolish on the face of it, and logically rude.
And of course if you go so far as to act smug about the absence of an unobtainable proof, or chide the other for their credulity, then you have crossed the line into outright ordinary rudeness as well.
It is likewise a madness of decision theory to hold off pending positive proof until it’s too late to do anything; the whole point of decision theory is to choose under conditions of uncertainty, and that is not how the expected value of information is likely to work out. Or in terms of plain common sense: There are signs and portents, smoke alarms and hot doorknobs, by which you can hope to determine whether your house is on fire before your face melts off your skull; and to delay leaving the house until after your face melts off, because only this is the positive and particular proof that you demand, is decision-theoretical insanity. It doesn’t matter if you cloak your demand for that unobtainable proof under the heading of scientific procedure, saying, “These are the proofs you could not obtain even if you were right, which I know you will not be able to obtain until the time for action has long passed, which surely any scientist would demand before confirming your proposition as a scientific truth.” It’s still nuts.
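The fire-alarm point is just an expected-value calculation. A toy version, with all payoffs invented for illustration:

```python
# Toy decision problem: act now under uncertainty, or wait for the
# definitive proof that only arrives after the time for cheap action
# has passed. All credences and payoffs are invented numbers.

p_fire = 0.3                   # credence that the house is on fire
cost_leave_now = -1            # inconvenience of leaving early
cost_stay_if_fire = -100       # face melts off before the "proof" arrives
cost_stay_if_no_fire = 0       # no harm done by staying

eu_leave = cost_leave_now
eu_wait = p_fire * cost_stay_if_fire + (1 - p_fire) * cost_stay_if_no_fire

print(eu_leave, eu_wait)       # -1 versus -30.0: leave now
```

Even at 30% credence, acting on the smoke alarm dominates waiting for the positive and particular proof, because by construction that proof arrives only after the decision no longer matters.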
Since this post has already gotten long, I’ve moved some details of probability theory, the subtext on cryonics, the sub-subtext on molecular nanotechnology, and the sub-sub-subtext on Artificial Intelligence, into:
Demands for Particular Proof: Appendices
I think peoples’ decision about whether to accept or resist the AGW proposition is being complicated by an implicit negotiation over political power that’s inevitably attached to that decision.
Because the scientific projections are still vague, people feel as if their decision about whether to believe in AGW is underdetermined by the evidence, in such a way that political actors in the future will feel entitled to retrospectively interpret their decision for purposes of political precedent. (“Were they forced by the evidence, or did they feel weak enough that they made a concession they didn’t have to make?”) And the precedent won’t be induced in terms of the mental states that a perfect decision theorist, thinking about the AGW mitigation decision problem, would have had. The precedent will be in terms of the mental states that a normal non-scientifically-trained (but politically active) human would have had. One of those mental states would be uncertainty about whether scientists (unconsciously intuited as potentially colluding with, and/or hoping to become, power-grubbing environmental regulators) are just making AGW up. In that context, agreeing that AGW is probably real feels like ceding one’s right of objection to whatever seizures of power someone’s found some vague scientific way of justifying.
It becomes a signaling game, in which each choice of belief will be understood as exactly how you would communicate a particular choice of political move, and the costs of making the wrong political move feel very high. So the belief decisions and the political actions become tangled up.
Roughly, people have no way of saying:
So instead, they say:
If it were possible to negotiate separately about AGW action and about precedents of policy concessions to e.g. scientists’ claims, then you might see less decision-theoretic insanity around the AGW action question itself.
(Note—most of this analysis is not on the basis of such data as opinion polls or controlled studies. It’s just from introspecting on my experience of attempting to empathize with the state of mind of AGW disputants, as recalled mostly from Internet forums.)
With an additional decade of political battles to scrutinize, I see this sort of thing playing out with things like immigration policy, and possibly COVID policy, too.
From what I can gather, there are plenty of Republicans who would be willing to make a one-time amnesty concession in exchange for securing the border. However, Republican politicians are aware that if they give any ground on amnesty in this particular case, then Democratic politicians are very likely to 1) drag their feet on the securing-the-border part of the deal, and then 2) cite the previous amnesty policy as precedent for future amnesty policies in the court of public opinion.
I was wondering how long it would be until the AGW issue was directly broached on a top-level post. Here I will state my views on it.
First, I want to fend off the potential charge of motivated cognition. I have spent the better part of two years criticizing fellow “libertarians” for trivializing the issue, and especially for their rationalizations of “Screw the Bengalis” even when they condition on AGW being true. I don’t have the links gathered in one place, but just look here and here, and linked discussions, for examples.
That said, here are the warning signs for me (this is just to summarize, will gather links later if necessary):
1) Failed predictions. Given the complexity of the topic, your models inevitably end up doing curve-fitting. (Contrary to a popular misconception, they do not go straight from “the equations they design planes from” to climate models.) That gives you significant leeway in fitting the data to your theory. To be scientific and therefore remove the ability of humans to bias the data, it is vital that model predictions be validated against real-world results. They’ve failed, badly: they predicted, by existing measures of “global temperature”, that it would be much higher than it is now.
2) Anti-Bayesian methodology accepted as commonplace. As an example, regarding the “hide the decline” issue with the tree rings, here’s what happened: Scientists want to know how hot it was millennia ago. Temperature records weren’t kept then. So, they measure by proxies. One common proxy is believed to be tree rings. But the tree-ring data don’t match the instrumental record in precisely the period where we have the best data.
The correct procedure at this point is to either a) recognize that they aren’t good proxies, or b) include them in toto as an outlier data point. Instead, what they do is keep all the data points that support the theory and throw out the rest, calling it a “divergence problem”, and further claim the remaining points as additional substantiation of the theory. Do I need to explain here what’s wrong with that?
And yet the field completely lacks journals with articles criticizing this.
3) Error cascades. Despite the supposed independence of the datasets, they ultimately come from only a few interbred sources, and further data is tuned so that it matches these data sets. People are kept out of publication, specifically on the basis that their data contradicts the “correct” data.
Finally, you can’t just argue, “The scientists believe AGW, I trust scientists, ergo, the evidence favors AGW.” Science is a method, not a person. AGW is credible to the extent that there is Bayesian evidence for it, and to the extent scientists are following science and finding Bayesian evidence. The history of the field is a history of fitting the data to the theory and increasing pressure to make sure your data conforms to what the high-status people decreed is correct.
Again, if the field is cleansed and audited and the theory turns out to hold up and be a severe problem, I would love for CO2 emissions to finally have their damage priced in so that they’re not wastefully done, and I pity the fools that demand Bengalis go and sue each emitter if they want compensation. But that’s not where we are.
And I don’t think it’s logically rude to demand that the evidence adhere to the standard safeguards against human failings.
http://www.overcomingbias.com/2009/11/its-news-on-academia-not-climate.html
People are crazy, the world is mad. Of course there’s gross misbehavior by climate scientists, just like the rest of academia is malfunctioning. But the amount of scrutiny leveled on climate science is vastly greater than the amount of scrutiny leveled on, say, the dietary scientists who randomly made up the idea that saturated fat was bad for you; and the scrutiny really hasn’t turned up anything that bad, just typical behavior by “working” scientists. So I doubt that this is one of the cases where the academic field is just grossly entirely wrong.
It just occurred to me that this really needs to be the title of a short popular book on heuristics and biases.
The book title had already occurred to me, but it shouldn’t be the first book in the series.
A good related video:
http://www.ted.com/talks/sendhil_mullainathan.html
http://en.wikipedia.org/wiki/Saturated_fat#Saturated_fat_intake_and_disease_-_Claimed_associations
...doesn’t look as though scientists were “randomly making things” up to me.
But what they’re saying fails to account for a lot of data. They’re ignoring it.
A popular article (w/Seth Roberts) covering the issue: http://freetheanimal.com/2009/09/saturated-fat-intake-vs-heart-disease-stroke.html
2010 Harvard School of Public Health (intervention/meta-analysis): Meta-analysis of prospective cohort studies evaluating the association of saturated fat with cardiovascular disease
Saturated fat, carbohydrate, and cardiovascular disease
Another meta-analysis: The questionable role of saturated and polyunsaturated fatty acids in cardiovascular disease
Population Studies: Cardiovascular disease in the masai
Cholesterol, coconuts, and diet on Polynesian atolls: a natural experiment: the Pukapuka and Tokelau island studies
Cardiovascular event risk in relation to dietary fat intake in middle-aged individuals: data from The Malmö Diet and Cancer Study
I am not particularly interested in a discussion of the virtues of saturated fat. It certainly seems like a bad example of scientists randomly making things up, though.
FWIW, here is a reasonably well-balanced analysis of the 2010 study you mentioned:
“Study fails to link saturated fat, heart disease”
http://www.reuters.com/article/idUSTRE61341020100204
If you look at guidance on saturated fat it often recommends replacing it with better fats—e.g.:
“You should replace foods high in saturated fats with foods high in monounsaturated and/or polyunsaturated fats.”
http://www.americanheart.org/presenter.jhtml?identifier=3045790
Epidemiological studies no doubt include many who substituted saturated fats with Twinkies.
Where does the “guidance” come from? You can’t cite “guidance” as evidence against the proposition that dietary scientists were making stuff up.
I was explaining a problem with studies like the one cited—in exploring the hypotheses that saturated fats are inferior to various other fats. Basically, they don’t bear on those hypotheses.
In this particular case, the authors pretty clearly stated that: “More data are needed to elucidate whether CVD risks are likely to be influenced by the specific nutrients used to replace saturated fat.”
Yes, and I expect that if you put this much scrutiny on most fields, where they are well-protected from falsification, you’d find the same thing. Like you said, scientists aren’t usually trained in the rationalist arts, and can keep bad ideas alive much longer than they should be.
But this doesn’t mean we should just shrug it off as “just the way it works”; we should appropriately discount their evidence for having a less reliable truth-finding procedure if we’re not already assuming as much.
Another difference is that climate scientists are deriving lots and lots of attention, funding, and prestige out of worldwide concern for global warming.
True—they seem ignorant of the “politics is the mind-killer” phenomenon. A boring research field may yield reliable science—but once huge sums of money start to depend on its findings, you have to spend proportionally more effort keeping out bias—such as by making your findings impossible to fake (i.e. no black-box methods for filtering the raw data).
Which climate researchers failed at tremendously.
How thorough is your knowledge of the AGW literature, Silas? I’m only familiar with bits and pieces of it, much of it filtered through sites like Real Climate, but what I’ve seen suggests that climate scientists are doing better than you indicate. For instance, the paper described here includes estimates excluding tree ring data as well as estimates that include tree ring data, because of questions about the reliability of that data (and it cites a bunch of other articles that have addressed that issue). They also describe methods for calibrating and validating proxy data that I haven’t tried to understand, but which seem like the sort of thing that they should be doing.
I think the narrow issue of multi-proxy studies teaches an interesting lesson to folks who like to think of things in terms of Bayesian probabilities.
I would submit that at a bare minimum, any multi-proxy study (such as the one you cite) needs to provide clear inclusion and exclusion criteria for the proxies which are used and not used.
Let’s suppose that there is a universe of 300 possible temperature proxies which can be used and Michael Mann chooses 30 for his paper. If he does not explain to us how he chose those 30, then how can anyone have any confidence in his results?
I haven’t read the paper myself, but here’s what the infamous Steve McIntyre says:
Yes, I’ve followed Real Climate, on and off, and with greater intensity after the Freakonomics fiasco (where RCers were right because of how sloppy the Freakons were), which directly preceded climategate. FWIW, I haven’t been impressed with how they handle stuff outside their expertise, like the time-discounting issue.
As for the paper you mention, my primary concern is not that the tree data by itself overturns everything, but rather, that they consider it a valid method to clip out disconfirmatory data while still counting the remainder as confirmatory, which makes me wonder how competent the rest of the field is.
The responses on RC about the tree ring issue reek of “missing the point”:
Not using the data at all would be appropriate (or maybe not, since you should include disconfirmatory data points). Including only the data points that agree with you would be very inappropriate, as they certainly can’t count as additional proof once they’re filtered for agreement with the theory.
I’m growing less clear about what your complaint is. If you’re just pointing out a methodological problem in that one paper then I agree with you. If you’re claiming that the whole field is so messed up that no one even realizes it’s a problem, then the paper that I linked looks like a counterexample to your claim. The authors seem to recognize that it’s bad to make ad hoc choices about which proxies to use or which years to apply them to, so they came up with a systematic procedure for selecting proxies (it looks similar to taking all of every proxy that correlates significantly with the 150 years of instrumental temperature records and then averaging those proxy estimates together, but more complicated). And because tree-ring data had been the most problematic (in having a poor fit with the temperature record), they ran a separate set of analyses that excluded those data. They may not explicitly criticize the other methodology, but they’re replacing it with a better methodology, which is good enough for me.
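The screening-then-averaging procedure described above can be sketched in miniature. This is a toy version under my own assumptions, not the paper’s actual method; the threshold, the correlation test, and all the data shapes are placeholders:

```python
import numpy as np

def screen_and_average(proxies, instrumental, r_threshold=0.5):
    """Toy proxy screening: keep only the proxy series whose overlap
    with the instrumental record correlates above a threshold, then
    average the survivors into one reconstruction.

    proxies: array of shape (n_proxies, n_years)
    instrumental: array of shape (n_recent_years,), assumed to cover
        the final n_recent_years of each proxy series.
    """
    n_recent = len(instrumental)
    selected = []
    for p in proxies:
        # Correlate each proxy's calibration-era tail with the record.
        r = np.corrcoef(p[-n_recent:], instrumental)[0, 1]
        if r > r_threshold:
            selected.append(p)
    if not selected:
        raise ValueError("no proxy passed screening")
    return np.mean(selected, axis=0)
```

The point being argued in this thread is visible in the structure: the pre-instrumental part of the reconstruction comes only from proxies that already agree with the instrumental record, so agreement over the calibration period cannot then be counted as independent confirmation.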
You don’t understand why I’m suspicious that a fundamental problem with their methodology, widely used as proof, is only being rooted out in 2008?
Be glad it’s happening at all.
Is it only being rooted out in 2008? There have been a bunch of different proxy reconstructions over the years—are you saying that this 2008 paper was the first one to avoid that methodological problem? Do you know the climate literature well enough to be making these kinds of statements?
There are several factors that can limit tree growth. Sometimes, low temperature is the bottleneck. So the tree ring data can in any case be considered a reliable indicator of a floor on the temperature: it wasn’t any colder than that.
They try to pick trees that are more likely to find low temperature the bottleneck. Sometimes it isn’t.
That doesn’t mean that the whole series is useless, even if they happen to be using it wrong (and I don’t know that they are).
It isn’t logically rude to criticize a science. Though in fairness to climate science, I think nearly every science routinely makes errors similar to the ones you mention. That said, we shouldn’t take this information and conclude that AGW is probably false. Scientists should be Bayesians, and the fact that they’re not is evidence against what they believe, but it isn’t strong enough evidence to reverse the evidence we get from the fact that they’re still scientists.
Would you clarify this? That seems on its face to be a very strong, which is to say improbable, claim.
The first hit on Google scholar for climate “divergence problem” turns up this: On the ‘Divergence Problem’ in Northern Forests: A review of the tree-ring evidence and possible causes from the journal Global and Planetary Change. From a cursory glance at the abstract, it seems to fit the bill.
I wasn’t saying journals don’t mention the divergence problem, if that’s what you thought. I was saying they don’t criticize the practice of stripping all the data you don’t like from a dataset and then calling the remaining points further substantiation of your theory. It’s this “trick” that is regarded as commonplace in climatology and thus “no big deal”.
There seem to be two kinds of criticism that it’s important to distinguish. On the one hand, there is the following domain-invariant criticism: “It’s wrong to strip data with no motivation other than you don’t like it.” The difficulty with making this criticism is that you have to justify your claim to be able to read the data-stripper’s mind. You need to show that this really was their only motivation. However, although you might have sufficient Bayesian evidence to justify this claim, you probably don’t have enough scientific evidence to convince a journal editor.
On the other hand, there are domain-specific criticisms: “It’s wrong to strip this specific data, and here are domain-specific reasons why it’s wrong: X, Y, and Z.” (E.g., X might be a domain-specific argument that the data probably wasn’t due to measurement error.) It seems much easier to justify this latter kind of criticism at the standards required for a scientific journal.
These considerations are independent of the domain under consideration. I would expect them to operate in other domains besides climate science. For example, I would expect it to be uncommon to find astronomers accusing each other in peer reviewed journals of throwing out data just because they don’t like it, even though I expect that it probably happens just as often as in climatology.
It’s just easier to avoid getting into psychological motivations for throwing data out if you have a theoretic argument for why the data shouldn’t have been thrown out. This seems sufficient to me to explain your observation.
In that case, you should be able to find climatologists openly admitting to throwing out data just because they don’t like it. But the “just because” part rules out all the alleged examples that I’ve seen, including those from the CRU e-mails.
This wasn’t my claim. They may very well have a reason for excluding that data, and were well-intentioned in doing so. It’s just that they don’t understand that when you filter a data set so that it only retains points consistent with theory T, you can’t turn around and use it as evidence of T. And no one ever points this out.
It’s not that they recognize themselves as throwing out data points because they don’t like them; it’s that “well of course these points are wrong—they don’t match the theory!”
Really? You gave me the impression before that you hadn’t read them, based on your reaction to the term “divergence problem”. But if you read them, you know that this is what happened: Scientist 1 notices that data set A shows cooling after time t1. Scientist 2 says, don’t worry, just delete the part after t1, but otherwise continue to use the data set; this is a standard technique. (A brilliant idea, even—i.e. “trick”)
It would be one thing if they said, “Clip out points x304 thru x509 because of data-specific problem P related to that span, then check for conformance with theory T.” But here, it was, “Clip out data on the basis of it being inconsistent with T (hopefully we’ll have a reason later), and then cite it as proof of T.” (The remainder was included in a chart attempting to substantiate T.)
Weren’t they filtering out proxy data because it was inconsistent with the (more reliable) data, not with the theory? The divergence problem is that the tree ring proxy diverges from the actual measured temperatures after 1960. The tree ring data show a pretty good fit with the measured temperatures from 1850 or so to 1960, so it seems like they do serve as a decent proxy for temperature, which raises the questions of 1) what to do with the tree ring data to estimate historical temperatures and 2) why this divergence in trends is happening.
The initial response to question 1 was to exclude the post-1960 data, essentially assuming that something weird happened to the trees after 1960 which didn’t affect the rest of the data set. That is problematic, especially since they didn’t have an answer to question 2, but it’s not as bad as what you’re describing. There’s no need to even consider any theory T. And now there’s been a bunch of research into why the divergence happens and what it implies about the proxy estimates, as well as efforts to find other proxies that don’t behave in this weird way.
Again, the problem is not that they threw out a portion of the series. The problem is throwing out a portion of the series and also using the remainder as further substantiation. Yes, the fact that it doesn’t match more reliable measures is a reason to conclude it’s invalid during one particular period; but having decided this, it cannot count as an additional supporting data point.
If the inference flows from the other measures to the tree ring data, it cannot flow back as reinforcement for the other measures.
But if they’re fitting the tree ring data to another data set and not to the theory, then they don’t have the straightforward circularity problem where the data are being tailored to the theory and then used as confirmation of that theory.
I’m starting to think that there’s a bigger inferential gap between us than I realized. I don’t see how tree ring data has been used “as reinforcement for the other measures,” and now I’m wondering what you mean by it being used to further substantiate the theory, and even what the theory is. Maybe it’s not worth continuing off on this tangent here?
Let me try one last time, with as little jargon as possible. Here is what I am claiming happened, and what its implications are:
Most proxies for temperature follow a temperature vs. time pattern of P1.
Some don’t. They adhere to a different pattern, P2, which is just P1 for a while, and then something different.
Scientists present a claim C1: the past history of temperature is that of P1.
Scientists present data substantiating C1. Their data is the proxies following P1.
The scientists provide further data to substantiate C1. That data is the proxies following P2, but with the data that are different from P1 trimmed off.
So scientists were using P2, filtered for its agreement with P1, to prove C1.
That is not kosher.
That method was used in major reports.
That method went uncriticized for years after certainty of C1 was claimed.
That merits an epic facepalm regarding the basic reasoning skills of this field.
Does this exposition differ from what you thought I was arguing before?
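The circularity I’m alleging can be shown with a toy simulation (all numbers invented; pure noise series stand in for the P2-type proxies): trim each series wherever it disagrees with a pattern, and the trimmed remainder will “agree” with that pattern no matter what the data were.

```python
import random

random.seed(0)

pattern = [0.1 * t for t in range(100)]  # stand-in for the claimed pattern P1

# 20 "proxies" of pure noise: they carry no information about P1 at all.
proxies = [[random.gauss(0.0, 5.0) for _ in range(100)] for _ in range(20)]

def trim_to_agree(series, pattern, tol=2.0):
    """Keep only the points of `series` lying within `tol` of `pattern`."""
    return [(t, x) for t, x in enumerate(series) if abs(x - pattern[t]) < tol]

trimmed = [trim_to_agree(p, pattern) for p in proxies]

# Every retained point matches P1 -- by construction, not as evidence.
agreeing = sum(len(t) for t in trimmed)
total = 100 * len(proxies)
print(f"{agreeing}/{total} points retained, all consistent with P1")
```

Since the inputs here are noise, the retained points’ perfect agreement with P1 tells us nothing about P1. That is the sense in which data filtered for agreement with a claim cannot be re-used as substantiation of that claim.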
Then I guess I just disagree with you. Scientists’ belief about the temperature pattern (P1) from 1850 to the present isn’t based on proxies—it’s based on measurements of the temperature which are much more reliable than any proxy. The best Bayesian estimate of the temperature since 1850 gives almost all of the weight to the measurements and very little weight to any other source of evidence (that is especially true over the past 50 years when measurements have been more rigorous, and that is the time period when P1 and P2 differ).
The tree ring proxy was filtered based on its agreement with the temperature measurements, and then used to estimate temperatures prior to 1850, when we don’t have measurements. If you want to think of it as substantiating something, it helped confirm the estimates made with other proxy data sets (other tree rings, ice cores, etc.), and it was not filtered based on its agreement with those other proxies. So I don’t think that the research has the kind of obvious flaw that you’re describing here.
I do think that the divergence problem raises questions which I haven’t seen answered adequately, but I’ve assumed that those questions were dealt with in the climate literature. The biggest issue I have is with using the tree ring proxy to support the claim that the temperatures of the past few decades are unprecedented (in the context of the past 1500 years or so) when that proxy hasn’t tracked the high temperatures over the past few decades. I thought you might have been referring to that with your “further substantiation” comment, and that either you knew enough about the literature to correct my mistaken assumption that it dealt with this problem, or you were overclaiming that nobody in the field was concerned about this, when we could at least get glimpses of the literature that dealt with it. (And I have gotten those glimpses over the past couple days—Wikipedia cites a paper that raises the possibility that tree rings don’t track temperatures above a certain threshold, and the paper I linked shows that they are trying to use proxies that don’t diverge.)
Are we agreed that the rapid rise in CO2 levels, to highs not seen in human history and owing to human intervention, is undisputed fact?
If so, it seems to me that the default extrapolation, from our everyday experience with systems we understand poorly, is that when you turn a dial all the way up without knowing what the heck you’re doing, you won’t like the results. Examples include: numerous cases of introducing animal species (bacteria, sheep, wasps) to populations not adapted to them, said populations then suffering upheaval; stock market crashes; losing two space shuttles; and so on.
The burden of proof seems to be on those who insist that yeah, CO2 levels are rising super fast, but don’t worry, it’ll be business as usual (except winters will be nicer and summers will need a few more ice cubes).
Wha...? Is that an argument by surface analogy? Does every increase in every value owing to human intervention lead to a catastrophe? How about internet connectivity? Land committed to agriculture? Air respired by humans? Shoes built? Radio waves transmitted?
How do you even measure the reference classes appropriately?
For some of these examples, yes, there are catastrophic scenarios on record.
Overgrazing in Iceland, to name one I’ve seen first-hand. Beaches despoiled by lethal green algae in France as a result of intensive pig farming are another. Shoes—that’s perhaps an excessively restricted category, but the Pacific Trash Vortex is one consequence of turning the dial up on manufacturing capacity without adequate control of the consequences. Improved Internet connectivity is having demonstrated, large and undesired effects on industries such as entertainment and newspapers.
Radio waves… no, offhand I can’t think of an issue on record with those, unless EMF sensitivity counts—but I would be hugely surprised if that turned out to be real (i.e. not psychogenic; the discomfort could be real).
You mentioned “failed predictions”, but left those unspecified. OK, here is a list of empirical confirmations of positive feedback loops involving CO2. Arctic ice melt is the one I’d lose sleep over, since the methane sequestered in Arctic ice is a much more powerful greenhouse gas than CO2. Ice melt also has an effect on water salinity which indirectly affects thermohaline circulation.
The causal details of how some of these positive feedbacks could bring about deeply undesirable consequences seem to me to be better established than the details of how runaway AI could lead to the destruction of human values. But I may have more to learn about either.
This isn’t analogy, as in “build something that looks like a bird and it will fly”. More like abstracting away from examples in several categories, to “systems that remain stable tend to be characterized by feedback loops, including both negative feedback (such as the governor) for regulation and positive for growth or excitation”. The latter leads to predictions, e.g. if you observe only one type of feedback in a stable system a search for the other type will generally be fruitful.
For instance, we observe that successful community Web sites tend to become even more successful as enthusiast users take the good news outside. Yet very few sites become very big. We can look for regulatory feedback loops. A good one stems from the joke “Nobody goes to that restaurant anymore, it’s too crowded.” As the audience of a community site increases, its output may become difficult to handle, turning people away feeling overwhelmed. I would predict that LW will run out of new commenters before it runs out of readers, that a lowered influx of new commenters leads to staleness in the contributions of post authors, in turn leading post authors to look elsewhere for stimulation.
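The feedback framing above can be made concrete with the simplest textbook case, logistic growth (parameters here are purely illustrative): a positive feedback (growth proportional to current size) paired with a negative one (crowding) yields a system that first accelerates, then stabilizes.

```python
def logistic_step(x, r=0.3, capacity=1000.0):
    """One step of logistic growth: compounding growth (positive
    feedback, the r*x term) damped by crowding (negative feedback,
    the (1 - x/capacity) factor)."""
    return x + r * x * (1.0 - x / capacity)

x = 1.0
trajectory = [x]
for _ in range(100):
    x = logistic_step(x)
    trajectory.append(x)

# Early on, growth compounds; eventually the negative feedback
# dominates and the trajectory flattens out near the capacity.
```

This is the prediction pattern in miniature: seeing only the early exponential phase of a stable system is a cue to go looking for the regulating feedback that must eventually bite.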
Now, perhaps CO2 levels rising through the roof aren’t going to do anything bad. But that’s as much an argument as saying “perhaps I will win the lottery”.
This raises the issue of what exactly people mean by ‘catastrophic’. None of the examples you give are ‘catastrophic’ on anything like the scale of what some prophesy for global warming. I personally think it is a misuse of the word catastrophe to apply it to the situations you describe. If global warming were only forecast to cause problems on that sort of scale then I don’t think anyone would be seriously contemplating the kinds of measures often advocated to mitigate the risk.
The effects of improved Internet connectivity are having large positive effects on the entertainment industries and newspapers from the perspective of most people who aren’t incumbents in those industries, just as technological progress generally benefits societies as a whole while sometimes reducing the income of groups who made their living from the supplanted technologies that preceded them.
That’s because you’re cherry-picking. Having the Gulf Stream stop, one of the possible consequences of Arctic Ice melt, would be very unpleasant.
In other cases the effects we’re seeing are only the start of a chain of effects. The Pacific Trash Vortex is basically us dumping tiny plastic particles into our own food chain, ultimately poisoning ourselves. It’s bad in itself, but the knock-on effects will be worse. Sure, it still pales in comparison to some predicted AGW effects: that’s why the latter has become the more pressing issue.
These examples were direct responses to Silas, who meant to ridicule the initial instances I gave of the class of bad things that happen as a result of pushing too hard on the parameters of systems we understand poorly, on various scales. Many of his own suggestions turn out not to be ridiculous at all, but rather serious matters.
The Gulf Stream makes the difference between Europe and the west coast of North America, not east coasts. Maybe it would be unpleasant, but a catastrophe?
I’ve heard claims that the gulf stream switching off would cause Britain to undergo a climate change that would have consequences I would call ‘catastrophic’, at least in the short term. Some predictions talk about average temperatures dropping by 5-8 C in a matter of months which would have severe consequences for British agriculture and would likely have a noticeable impact on GDP. I’m not sure I put much faith in those predictions however.
This would also be a catastrophe on a different scale from the more alarmist AGW predictions. We’re talking about a major disruption to the British economy but not an existential threat to the human race.
I thought you were using that as an example of a potential catastrophic effect of global warming, whereas I was saying none of your examples of things that have actually happened are what I would call catastrophic. I have heard some predictions of what might happen to the climate in Britain if arctic ice melt caused the gulf stream to stop and if those predictions were to pan out then I think ‘catastrophic’ would be an appropriate word to use for the consequences for Britain.
I don’t disagree that some of the predictions for the consequences of AGW are situations for which the word ‘catastrophic’ is appropriate. My point is that some of these predictions are an entirely different scale of disaster from anything you’ve given as an example of actual consequences of human activity to date. The Pacific Trash Vortex cannot reasonably be described as ‘catastrophic’ in my opinion, though dire predictions may exist that if they transpire might justify such language.
Based on the voting patterns, I’m going astray somewhere. We don’t seem to disagree on the facts (high CO2 levels, past environmental damage) and I’m not seeing arguments directed at my reasoning, beyond the criticism of “surface analogy” that I’ve done my best to address. So I’ll let this be my final comment on the topic, and hope to find insight in others’ discussion.
We quite agree there hasn’t yet been a catastrophe on the scale predicted for AGW: we wouldn’t be having this conversation if there had been. If you read the original post all over again, you’ll find that was its entire point. Don’t demand that particular proof.
We don’t want to play dictionary games with the word “catastrophe”. One constructive proposal would be to consider the cost to our economies of cleaning up one or the other of these environmental impacts—including their knock-on effects—versus the costs of prevention. We haven’t incurred the costs of the Trash Vortex yet, it’s not making itself felt to you; but it’s nevertheless a fact not a prediction, and we can base estimates on it.
The typical cost of cleaning up an oil spill seems to be on the order of $10M per ton. The Pacific garbage patch may contain as much as 100 million tons of plastic debris. As an order of magnitude estimate, one Trash Vortex appears to be worth one subprime crisis, albeit spread out over a longer period.
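Taking both quoted figures at face value (each is rough and contested), the back-of-envelope arithmetic behind that comparison is:

```python
# Both inputs are the rough figures quoted above, not measured values.
cost_per_ton = 10_000_000   # ~$10M per ton, typical oil-spill cleanup cost
tons = 100_000_000          # upper estimate of plastic in the garbage patch

total_cost = cost_per_ton * tons
# total_cost comes out to 10**15 dollars, i.e. on the order of
# a quadrillion dollars as an upper-bound order-of-magnitude figure.
```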
We’re clearly in Black Swan territory, and yet this is just one example picked almost at random (in fact, picked from what Silas took to be counterexamples).
OK, I’ll try to make it more explicit. Your reasoning seems to be that our experience with complex systems that we don’t fully understand is that disrupting them has bad unintended consequences, and that therefore the burden of proof is on those who suggest that we don’t need to take drastic action to reduce CO2 levels.
I don’t think your conclusion follows from your premise because it seems to me that there are no examples of bad unintended consequences that we haven’t been able to deal with without paying an excessive cost and few examples of bad unintended consequences that even end up with a negative overall economic cost. The only reasonable argument for adopting the kind of drastic and hugely expensive measures necessary to significantly reduce CO2 levels is that the potential effects are so catastrophic that we can’t afford to risk them. There are no examples of similar situations in the past, though as you rightly point out that is not strong evidence that such situations cannot happen since we might not be around to discuss the issue if they had. On the other hand there are lots of examples of dire/catastrophic predictions that have failed to pan out, although in some cases mitigating action has been taken that means we haven’t had the control experiment of doing nothing.
It seems to me that the burden of proof is still very much on those who argue we must take very economically costly actions now because unlike previous problems which have turned out to be relatively cheap to deal with this problem poses a significant risk of genuine catastrophe.
It’s also important to consider the cost of doing nothing and dealing with the consequences. The trash vortex is a problematic example to use here because there have not been any significant bad consequences yet. It may be a fact that it exists but I haven’t found any estimates of the economic cost it is imposing right now and only vague warnings of possible higher pollutant levels in future.
If the cost of doing nothing about CO2 levels were similar to the cost we appear to be paying for doing nothing about the Pacific Trash Vortex then it would be a no brainer to do nothing about CO2 levels.
Ah, that particular idea, that all human pleasures are harmful to the environment, is pretty much religious. That’s not at all what the actual impact looks like.
Computing is basically blameless, in any direct sense, for global warming. We should probably enjoy it as much as possible. Electricity is good. Trains are good. Holidaying is good.
Air conditioning is bad. Air travel is bad. Short product lifetimes are bad.
The situation is far more positive than some make it out to be. Even the direst climate change predictions necessitate drastic changes in only some aspects of life.
AGW can’t take away modern medicine or virtual reality from you.
Why do you think “harmful for the environment” means “leading to global warming”? Lots of things are harmful for the environment. Drying swamps to make railroads harms it. Holidaying leads to decreased “old habitat” biodiversity. Building power plants on small mountain rivers leads to decreased biodiversity, too. Yes, these things are good for us. It just has no bearing on whether they are good for nature.
My favorite one: burning wood for heat. Better than fossil fuels for the GW problem, but really bad for local air quality.
Of course, “leading to global warming” is a subset of “harmful for the environment”. Agreed on all counts.
Computing can’t harm the environment in any way—it’s within a totally artificial human space.
The others (“good”) can harm the environment in general, but are much better for AGW.
Well...
You claim there are significant issues with the climate science process, but admit there are no journal articles criticizing the process. If you know enough to find faults with their science, why haven’t you yourself written an article on the matter?
Do you think there is something inherent in the culture of climate science that introduces these anti-Bayesian biases? Why is climate science subject to this when other sciences are not?
Are you saying the field is systemically politically driven from the top down?
Have you followed the climategate email leak story at all? One of the more damning themes in the leaked emails is the discussion of ways to keep dissenting views out of the peer reviewed journals. One of the stronger arguments used against AGW skeptics was that there were not more papers supporting their claims in peer reviewed journals. Given the prevalence of this argument, clear evidence of efforts to keep ‘dissenting’ opinions out of the main peer reviewed journals is a big problem for the credibility of climate science. For example:
And this comment is also rather damning:
What, specifically, is “damning” about those quotes?
Suppose creationists took over a formerly respected biology journal. Wouldn’t you expect to find quotes like the above (with climate sceptics replaced by creationists) from the private correspondence of biologists?
AGW skeptics have often been challenged on the lack of peer reviewed papers in credible climate science journals supporting their arguments. Now it is quite possible that this is the case because skeptical papers have been rejected purely due to being bad science (as is the case with the lack of papers supporting the effectiveness of homeopathy in medical journals). However, the absence of papers from the key journals cannot be treated as independent evidence of the badness of the science if there is a concerted effort by AGW believers to keep such papers out of the journals.
It is legitimate to attack the science the AGW skeptics are doing. It is not legitimate to dismiss the science purely on the basis that they have not been published in peer reviewed journals if there is a concerted effort to keep them out of peer reviewed journals based on their conclusions rather than on their methods. Now I’m sure the AGW believers feel that they are rejecting bad science rather than rejecting conclusions they don’t like but emails like the above certainly make it appear that it is the conclusions as much as the methods that they are actually objecting to.
In my opinion the CRU emails mean that it no longer appears justified to ignore claims by AGW skeptics purely because they have not appeared in a peer reviewed journal. They may still be wrong but there is sufficient evidence of biased selection by the journals to not trust that journal publication is an unbiased signal of scientific quality.
Agreed. “No peer-reviewed publications” is not an argument that I’ve ever used or would use, even in advance of the CRU emails, because of course that is how academia works in general.
For the most part, I don’t think you’re quite answering my question.
You present two explanations for the lack of peer-reviewed articles that are sceptical of the scientific consensus on global warming. The first is that there is unjust suppression of such views. The second is that such scepticism is based on bad science. You say that you think the leaked emails support the first explanation, and that there is sufficient evidence of biased (I’m guessing “biased” means “unmerited by the quality of the science” here) selection by journals. What is that sufficient evidence? More specifically, how does the information conveyed by the leaked emails distinguish between the first and second scenarios?
This addresses my questions, but I was asking for more specifics. Let A = “AGW sceptics are being suppressed from journals without proper evaluation of their science” and B = “AGW sceptics are being suppressed from journals because their science is unsound”. Let E be the information provided by the email leaks. How do you get to the conclusion that the likelihood ratio P(E|A)/P(E|B) is significantly above 1?
Personally I can’t see how the likelihood ratio would be anything but about 1, and it seems to me that those who act as if the ratio is significantly greater than 1 are simply ignoring the estimation of P(E|B) because their prior for P(B) is small.
(EDIT: I originally wrote P(A|E) and P(B|E) instead P(E|A) and P(E|B). My text was still, apparently, clear enough that this wrong notation didn’t cause confusion. I’ve now fixed the notation.)
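A toy calculation (every probability here is invented purely for illustration) shows why the likelihood ratio, not the raw fit of E to A, is what matters:

```python
# Hypothetical numbers: how likely are emails like E under each story?
p_e_given_a = 0.8   # P(E | A): suppression without proper evaluation
p_e_given_b = 0.7   # P(E | B): rejection of genuinely unsound science

likelihood_ratio = p_e_given_a / p_e_given_b   # about 1.14

# Odds form of Bayes: posterior odds = prior odds * likelihood ratio.
prior_p_a = 0.25                       # hypothetical prior on A
prior_odds = prior_p_a / (1.0 - prior_p_a)
posterior_odds = prior_odds * likelihood_ratio
posterior_p_a = posterior_odds / (1.0 + posterior_odds)

# If B predicts the emails nearly as well as A does, the posterior
# on A barely moves from the prior, however well E "fits" A.
```

A large update toward A requires P(E|B) to be genuinely small, which is exactly the quantity the argument above says is being ignored.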
I do think the likelihood ratio is significantly above 1. This is based off reading some of the emails, documents and code comments in the leaks. Here’s a reasonable summary of the emails. It looks like dubious science to me. I find it hard to understand how anyone can claim otherwise unless they are ideologically motivated. If you genuinely can’t see it then I’m not really interested in arguing over minutiae so we’ll just have to leave it at that.
It seems to me that AGW skeptics made a variety of claims that AGW believers dismissed as paranoid: there was a conspiracy to keep skeptical papers out of the journals; there were efforts to damage the careers of climate scientists who didn’t ‘toe the party line’; there were dubious and possibly illegal efforts to keep the original data behind key papers out of the hands of skeptics despite FOI regulations. I did not see many AGW believers prior to the climategate emails saying “Yes, of course all of that happens, that’s just the way science operates in the real world”.
When the CRU leaks became public and substantiated all the ‘paranoid’ claims above, including proof of illegal destruction of emails and data to avoid FOI requests, I find it suspicious when people claim that it doesn’t change their opinions at all. The standard response seems to be “Oh yes, that’s just how science works in the real world. I already knew scientists routinely engage in this sort of behaviour and the degree of such behaviour revealed in the emails is exactly in line with my prior expectations so my probability estimates are unchanged”. That seems highly suspect to me and looks an awful lot like confirmation bias.
You’re still talking about how the e-mails fit into the scenario of fraudulent climate scientists, that is, P(E|A) by my notation. I specifically said that I feel P(E|B) is being ignored by those who claim the e-mails are evidence of misconduct. Your link, for example, mostly lists things like climatologists talking about discrediting journals that publish AGW-sceptical stuff, which is exactly what they would do if they, in good faith, thought that AGW-scepticism is based on quack science. Reading the e-mails and concluding that sceptical papers are being suppressed without merit seems like merely assuming the conclusion.
(Regarding the FOI requests, that might indeed be something that might reasonably set off alarms and significantly reduce P(E|B) - if you believe the sceptics’ commentaries accompanying the relevant quotes. But googling for “mcintyre foi harassment” and doing some reading gives a different story.)
(EDIT: Fixed notation, as in the parent.)
My impression from reading the emails is that different standards are being applied to the AGW skeptics because of their conclusions rather than because of their methods. At the same time there is evidence of data massaging and dubious practices around their own methods in order to match their pre-conceived conclusions. The whole process does not look like the disinterested search for truth that is the scientific ideal.
My P(B|E) would be higher if I read emails that seemed to focus on methodological errors first rather than proceeding from the fact that a journal has published unwelcome conclusions to the proposal that the journal must be boycotted.
I think there’s too much attention paid to the emails, and not enough to all of the publicly available information about the exact same events. Maybe it’s because private communications seem like secret information that contain the hidden truth, or maybe it’s just a cascade effect where everyone focuses on the emails because everyone is focusing on the emails.
The second email that you quoted is in response to the publication of a skeptical article by Soon & Baliunas (2003) in the journal Climate Research which generated a big public controversy among climate scientists. Reactions to that publication include several editors of the journal resigning in protest (and releasing statements about why they resigned), the publisher of the journal writing a letter admitting that the article contained claims that weren’t supported by the evidence (pdf), and a scientific rebuttal to the article being published later that same year. I think that you get a better sense of what happened (and whether climate scientists were reacting to the methods or just the conclusions) by reading accounts written at the time than from the snippets of emails. And of course there’s Wikipedia.
Would you expect to see evolutionary biologists discuss the methodological errors of creationist arguments in private correspondence?
(I don’t think this is the place for this, since I don’t think we’re getting anywhere.)
Upvoted for the parenthetical.
FOI requests? Which ones? Those for proprietary data sets that they weren’t allowed at that time to release, or the FOI requests for information available from a public FTP site?
Voted you up not for your particular assessment of P(A|E)/P(B|E) but for using this pattern of assessing evidence to guide the conversation.
For the same reason I haven’t personally solved every injustice: a) time constraints, and b) others are currently raising awareness of this problem.
Other sciences are affected by anti-Bayesian biases, and this will be a tendency in proportion to the difficulty of finding solid evidence that your theory is wrong. Which is why I claim e.g. sociology and literature are mostly a waste of time.
Generally speaking, science is in some ways too strict and some ways not strict enough. Eliezer_Yudkowsky has actually pointed out before the general failure to appropriately teach rationality in the classroom, and so scientists in general aren’t aware of this problem.
Politics, of course, does play a part. When it’s not just about “who’s right” but about “who gets to control resources”, then the biases go into hyperdrive. People aren’t just pointing out problems with your research, they’re fighting for the other team! The goal is then about proving them wrong, not stopping to check whether your theory is correct in the first place. (“Ask whether, not why.”)
I basically agree with SilasBarta. If you look carefully, what’s going on in climate science is absolutely appalling.
One can ask a simple probability question: Given that a climate simulation matches history, what is the probability that it will accurately predict the future?
Another question: What evidence is there that climate simulations are accurate besides the fact that they match history?
And another question: If you take 10 or 15 iffy climate simulations, average them, and then use a bootstrap or equivalent method to produce a 95% confidence interval, are you actually accomplishing anything?
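The last question can at least be made mechanically precise. A percentile bootstrap over a small model ensemble looks like this (the projection numbers are invented purely for illustration); note what the resulting interval can and cannot capture:

```python
import random

random.seed(0)
# Twelve hypothetical model projections of warming by 2100 (degrees C).
# Made-up numbers to show the mechanics, not real model output.
projections = [2.1, 2.8, 3.4, 1.9, 2.6, 3.1, 2.4, 2.9, 3.6, 2.2, 2.7, 3.0]

def bootstrap_ci(data, n_resamples=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the ensemble mean."""
    means = []
    for _ in range(n_resamples):
        sample = [random.choice(data) for _ in data]
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int(n_resamples * alpha / 2)]
    hi = means[int(n_resamples * (1 - alpha / 2))]
    return lo, hi

lo, hi = bootstrap_ci(projections)
# The interval only reflects spread *among* the models. A bias shared by
# all twelve (say, a common mistaken feedback assumption) is invisible to
# it, so "95% confidence" quantifies model disagreement, not model truth.
```

That, I take it, is the force of the question: averaging iffy simulations narrows the interval without touching the shared structural error.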
I’m too lazy to write a top-level post about it, but the main problem with AGW as I see it is that most people have a reference class of “statements said by people like the IPCC and Al Gore, who think that AGW is real, and that the Kyoto Protocol and similar activities are a good idea”.
One group of people looks at pretty solid evidence that AGW is real, and from this reference class infers that Kyoto Protocol-type actions must also be good.
Another group looks at pretty solid evidence that the Kyoto Protocol is a very bad idea, and from the same reference class infers that AGW might not be real.
All media show these issues as highly entangled, even though they’re not really (well, if AGW is false, then Kyoto Protocol is almost certainly bad, but all three other combinations are possible).
I have two reference classes—one for AGWers’ statements about climate which I estimate to be almost all true, and another for AGWers’ statements about proper policy which I estimate to be almost all false.
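The disentangling can be made explicit. Treating the two claims as independent booleans (a toy sketch of the comment above, nothing more):

```python
from itertools import product

# Toy enumeration of the four (AGW real?, Kyoto good?) combinations.
# Per the comment above, only (AGW false, Kyoto good) is ruled out.
consistent = [(agw, kyoto)
              for agw, kyoto in product([True, False], repeat=2)
              if not (agw is False and kyoto is True)]
# Three combinations remain live, so accepting one claim
# does not settle the other.
```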
Most of my friends do not believe the scientific consensus that being overweight causes health problems. On both sides of the argument you see the same phenomenon you observe here—people do not draw a distinction between this assertion, and a particular prescription, in this instance “dieting is good for your health”. From what I’ve looked at so far, I’m pretty confident of the first, but much less so of the second.
This is a great example, thanks.
I think you’re exactly right, and the problem is that people are often so partisan that they don’t even think of this or understand it as a possibility. Unfortunately, this problem isn’t just limited to AGW. I see it in many discussions of policy questions, where people argue about which statistics are right instead of saying, “Assume these factual statements are true: what is the proper policy?”
Although I don’t have any references handy, I’ve seen people argue that Kyoto-like changes in our lifestyles are necessary on ethical grounds apart from global warming. More often they’ll simply dismiss any sort of technological solution as a “quick fix” or even as the thing that caused the problem in the first place.
There are quite a few people who would like to abdicate control over the physical world.
What do you mean by “abdicate control over the physical world”?
I fit the profile described here quite well. Feel free to ask (I know I’m 6 years late, but that’s the point of internet forums).
People argue the most ridiculous things. If they want to “abdicate control over the physical world” they can simply kill themselves—that’s the only way.
Part of the problem stems from different uses of the word “caution”.
There are a range of possible outcomes for the earth’s climate (and the resulting cost in lives and money) over the next century ranging from “everything will be fine” to “catastrophic”; there is also uncertainty over the costs and benefits of any given intervention. So what should we do?
Some say, “Caution! We don’t know what’s going to happen; let’s not change things too fast. Keep our current policies and behaviors until we know more.”
Others say, “Caution! We don’t know what’s going to happen, and we’re already changing things (the atmosphere) very quickly indeed. We need to move quickly politically and economically in order to slow down that change.”
For most people it seems that caution means: assume things will continue on more or less the same and be careful about changing your behavior, rather than seek to avoid a high risk of catastrophic loss.
Discussions about runaway AI often take a similar turn. People will come up with a list of reasons why they think it might not be a problem: maybe the human brain already operates near the physical limit of computation; maybe there’s some ineffable quantum magic thingy that you need to get “true AI”; maybe economics will continue to work just like it does in econ 101 textbooks and guarantee a soft transition; maybe it’s just a really hard problem and it will be a very long time before we have to worry about it.
Maybe. But there’s no good reason to believe any of those things are true, and if they aren’t, then we have a serious concern.
Personally, I think it’s like we’re driving blindfolded with the accelerator pressed to the floor. There’s a guy in the other seat who says he can see out the window, and he’s yelling “I think there’s a cliff up ahead—slow down!” We’re suggesting he not be too hasty.
But I can see the other side, too: if we radically changed policy every time some crank declared that doom was at hand, we’d be much worse off.
I have proposed things similar to those you have suggested as arguments against runaway AI, mainly to show how little we do actually understand about what it takes to be intelligent with finite resources.
I wouldn’t use these as arguments that it isn’t going to be a problem, just that working to understand real-world intelligence might be a more practical activity than trying to build safeguards against scenarios we don’t have a strong inside view for.
While I loved this essay, I felt uncomfortable with the vagueness with which the group of “AGW Skeptics” was defined. If we define that group loosely to include every AGW skeptic, then there are obviously rationality impoverished reasons AGW skeptics have for their beliefs, but the same is true for AGW believers. Attacking strawmen gets us nowhere.
A worthy attack on AGW skeptics should be directed at the leading skeptics who have expertise in climatology. They are making very specific scientific claims, such as:
Negative feedback loops in the atmosphere will mostly cushion atmospheric CO2 increases.
Fluctuations in cosmic radiation have been the main driver of warming in the 20th century.
These claims—while I think we have good scientific evidence against them—are not obviously unreasonable. What is unreasonable is the insinuation in the essay that skeptics who are professional climatologists deny the claim “we know from physics that [CO2 is] a greenhouse gas”. They don’t—the real issue that the professionals debate is whether the addition of greenhouse gases will cause a positive or negative feedback (without a positive feedback, the warming from increased CO2 levels is tolerable). The answer to that question requires much more subtle reasoning, and even with the aid of numerous state-of-the-art computer models, the variance in projections is still wide enough to warrant caution in our predictions. To liken an AGW skeptic to a creationist is unjustified, and I mean that in the deepest possible way, i.e. I’d be far more comfortable betting my money in a prediction market to support evolution than to support AGW.
Global Warming Debate:
http://www.takeonit.com/question/5.aspx
OT: Since the reference to Thorium reactors wasn’t linked in the top level post, here are some links for those who are curious:
http://en.wikipedia.org/wiki/Thorium#Thorium_as_a_nuclear_fuel
http://www.youtube.com/watch?v=AZR0UKxNPh8
http://www.theoildrum.com/node/4971
http://thoriumenergy.blogspot.com/
http://thoriumenergyalliance.com/
FWIW I started the “Thorium For Energy” advocacy group on Facebook a while ago. Join it if you can. Most people are simply unaware of this technology.
I also have the thorium energy question on TakeOnIt here: http://www.takeonit.com/question/127.aspx
Not many oppose it simply because not many know it.
Thanks, Michael and Ben, for pulling those details out!
I read, learned, and then joined the Thorium Energy Alliance FB group that was mentioned but not yet linked so I could help with the viral promotion.
Fascinating. My girlfriend is basically getting her degree in alternative energy and she’s never even heard of this. The stuff about how we ended up not using thorium so that we could produce weaponizable material is most upsetting thing I’ve learned in a long time.
The report of creationists deploying the tactic in question (claiming that finding a transitional fossil where one was not previously known creates two gaps that now need to be filled by other fossils) is not a Poe.
As a very active member of the Richard Dawkins Foundation Forums (RDF), I can tell you that I have seen this ploy used on more occasions than I can count.
This is in addition to people who think that evolution also means that there should exist the Crocoduck, the Cat-dog, and the Bird-fish (to name just a few), or that evolution means that polar bears had the color scared off of them (attributed to one Richard Byers, notorious for Stoopid on the RDF. We never discovered what exactly scared polar bears white, as he was banned for violations of the user agreement before he could get around to describing the process of being scared white).
Now, I must read the rest of this article. Just wanted to clear this up. If anyone is interested, I am sure that I could get exact links to the posts on RDF that make such claims. Although they represent the most insipid and fanatic of religious persons in the world, it remains terrifying that anyone could be so willfully ignorant in the face of experts (there are more than a few real scientists on the RDF in the fields of Paleontology, Evolutionary Biology, Mathematics, Computer Science, and Cog Sci).
I reviewed this topic last May:
When Kevin Dick offered to bet me, I offered even odds that the CO2-temp correlation over the last 60yr would continue over 20yr. Kevin pointed out that this is far less than standard projections, produced by assuming positive feedback models. So the key issue is how much confidence to place in such projected strong feedback. I didn’t have enough confidence in it to bet Kevin in its favor, but not sure how much I’d bet against it either.
This seems like a reasonable summary of the scientific consensus and I’m generally pretty willing to accept the factual elements of the scientific consensus. The judgment elements of the scientific consensus, such as “we should all be dedicated to cutting CO2 emissions” seem very much less reliable. It’s noteworthy that people who accept the facts but not the judgments, like Dyson, are often called deniers.
Defensibility and wanting-to-retain-beliefs both seem likely, but seem to me to be different things. Also, a third thing (or perhaps a variant of wanting-to-retain-beliefs) that I think is often involved is wanting-not-to-be-fooled: someone believing that, unconstrained by empirical evidence, a seemingly plausible argument can be constructed for anything, so they’d better not even start to consider any argument for a contrarian position (especially one that favors unusual actions) lest they be exploited either by deliberate trickery or by a parasitic meme.
A good way to turn this question around when discussing with an intelligent person is to ask this question: “What scientific evidence could be announced next year that would push you to change your position?”
This forces them to consider, at least, the source of their disagreement.
Not coincidentally, it’s a question that we should frequently ask ourselves as well.
Part of my problem with arguing about AGW is that it has gotten to the point that it’s not a science question, it’s a political question at this point. So I can be reasonably sure that any “scientific evidence” that will be announced will come from one faction or another, and will have been carefully vetted by the policy board to ensure that it hews to the party line. (Whichever party it comes from. All sides are equally to blame as far as I can tell.)
In this kind of environment, it’s hard to take any evidence at face value. Both (all) sides accuse the others of double-counting evidence, hiding unflattering data points, and shading results and simulations.
The only thing that makes any sense in this context is to compare historical projections to the world. Some AGW proponents seem to have over-predicted doom, so I heavily discount doom projections. It’s not obvious that the worldwide climate is warmer than the (very) long-term trend would indicate. There seems to be obfuscation about polar melting. It seems obvious that sea-level rise is minuscule to date. Climate change doesn’t make any specific predictions AFAICT that have been upheld.
There are probably other rules of thumb that ought to be useful in this context, but that’s all that comes to mind at the moment.
On the issue of “Have you ever seen an Ape evolving into a Human?” and the requested video tape (We get that too at RDF), I have found the following to be very helpful in showing just how stupid the claim is. Simply ask the person:
“How do you know that your father is really your father? Do you see him have sex with your mother to conceive you? How do you know that she did not have sex with someone else? Do you have a video tape to prove this?”
They of course, will have to admit that they take it as given based upon the testimony of their parents.
But, no creationist is going to let a thing like reality or evidence stand in their way. In the words of more than a few Creationists, such as the founder of the Creation Museum: If reality and scripture contradict, reality is wrong and scripture is right (paraphrased, as this has been stated in more ways than I could possibly recount here)
Your challenge will hold up until you find the one person whose parents have taped every time they had sex and can provide the dates on said tapes. You may want to watch out, because now you never know when you might start having to provide evolution tapes :P
Of course… that would imply the person had watched their parents sex tapes, so you’re probably still very safe.
You still can’t know that they taped themselves every time they had sex. Nor can you know that neither of them had sex with someone else that wasn’t taped.
The basic point here is a good one, and it’s obviously right as it applies to evolution and very likely to AGW as well, though I know very little about that and rely entirely on the fact of the scientific consensus in forming my opinion. But at the same time it is important to keep in mind that just because someone has worked hard and offered you the best evidence that they could be reasonably expected to muster under current circumstances, that doesn’t necessarily mean that they have come anywhere near proving the case.
Of course. But it is logically rude to demand some knowably unobtainable even-if-they’re-right proof instead, and then toss all the other arguments out the window.
Agreed
Probabilities can’t be ignored, of course, but nobody ever actually has the correct probabilities, except in mathematics and experimental science. (Although in mathematics, you might say the correctness is only platonic.) When you say “probabilities” in this article, IMO, you are rather blasé about interchanging these objective, accurate entities with the nonobjective entities that get mixed up alongside them.
The whole point of making an objective claim should be to attempt to provide evidence for why your probabilities are accurate. Thus the initial quote you mentioned about the burden of evidence is actually completely crucial and well justified, in my mind. In a situation where you can’t establish this accuracy because evidence is lacking, there’s really no other resort except to be skeptical, or to rely on the opinion of experts who are better able to interpret more difficult evidence.
The problem with the evolution example is that the theistic person isn’t very well characterized by their skepticism. On the contrary, I would say they are more characterized by their willingness to believe, and they are a reverse-skeptic, who is unswayed by evidence but also unswayed by lack of evidence. I see much more of a problem with this kind of attitude than with the attitude of “over-skepticism” (to crudely paraphrase) that you’ve alluded to here.
Probability is a strength of belief. Even if you are not able to calculate that strength precisely, that strength should respond to evidence in a certain way, and there is a rational probability to assign a proposition given the evidence observed.
The problem with the theist is not being consistently over or under skeptical, but being more skeptical about propositions they don’t want to believe, so they are over-skeptical of evolution in the way that Eliezer describes.
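One way to see what “should respond to evidence in a certain way” means (illustrative numbers only): two people with different priors who update rationally multiply their odds by the same likelihood ratio; motivated skepticism shows up as applying a different ratio to unwelcome evidence.

```python
def update_odds(prior_odds, likelihood_ratio):
    """Odds-form Bayes: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

lr = 4.0                           # evidence favouring the claim 4:1
believer = update_odds(3.0, lr)    # prior 3:1 for     -> posterior 12:1
sceptic = update_odds(0.25, lr)    # prior 1:4 against -> posterior 1:1
# Both shifted by the same factor of 4. The selectively sceptical theist
# instead applies lr to congenial evidence and ~1 to uncongenial evidence.
```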
Oh… I also thought that I would throw this into the mix.
When a creationist or evolution-denier says that “No one has ever seen an ape evolving into a man, or a dinosaur evolving into a bird.” Often, what they mean is that an Ape literally turned into a man while it was alive. The more subtle creationist will just imply that a thing that was fully ape gave birth to a thing that was fully man, yet I have discovered that both types are to be considered about equally likely to be encountered.
Neither type of Creationist or evolution-denier seems to understand that were these things to occur, both would disprove the Theory of Evolution...
Also, for anyone who wishes to know how far some of these people go… William Dembski, who runs a “University” that teaches “Creation Science” has, as part of one of his classes, an assignment whereby they get credit for making posts critical of evolution and in support of creation on what they term “Hostile Internet Forums”. PZ Myers has taken to deleting any posts and banning any members who are discovered to be part of these classes. Richard Dawkins’ forums have yet to devise a strategy against this sort of thing...
Eliezer:
Don’t you realize that I have work to do and a personal life to engage in without you posting things that I must obviously drop everything and read and think about like the Bostrom paper. Have a heart, man. Have a heart.
The science of predicting exactly what is going to happen to the climate is pretty immaterial. Humans are going to burn the vast majority of remaining oil reserves. Base your plans on that rather than devoting resources to the losing battle of preventing other people from exploiting cheap energy.
“Human beings and chimpanzees have 95% shared genetic material. It’s over.” Where does this number come from? I’ve heard people saying the DNA was 98% the same since well before the human genome was sequenced, and the chimpanzee genome isn’t completely sequenced yet. Where does this number come from?
The first google hit for “human chimpanzee DNA” is from Answers in Genesis, but the second might be more useful:
http://news.nationalgeographic.com/news/2005/08/0831_050831_chimp_genes.html
I think the key is that most people don’t care whether or not AGW is occurring unless they can expect it to affect them. Since changing policy will negatively affect them immediately via increased taxes, decreased manufacture, etc., it’s easier to just say they don’t believe in AGW period. If the key counter-AGW measure on the table were funding for carbon-capture research, I think many fewer people would claim that they didn’t believe in AGW.
My take on global warming is that no policy that has significant impact on the problem will be implemented until the frequency of droughts/hurricanes/floods/fires increases to obvious levels in the western world (fuck-Bengali policy is already in place, and I don’t think more famines will change that). And by obvious, I mean obvious to a layman, as in ‘when I was young we only had 1 hurricane per year, and now we have 10!’ By this time, the only option will probably be technological.
And by ‘them’, they don’t necessarily even mean ‘future them’. They mean ‘the status of them in the relatively near future’.
I agree.
I realize that this is not a debate about global warming, but respectfully, you are wrong here. It’s just that the privileged hypothesis is hidden from view by means of conjunction.
It may surprise you, but the actual global warming hypothesis as pushed by the likes of the IPCC is NOT simply that increased levels of CO2 will result in an increase in global surface temperatures.
The actual hypothesis is that increased CO2 levels will cause an increase in global surface temperatures, which will cause an increase in levels of water vapor in the atmosphere, which will cause temperatures to rise further, and so on, until there has been a dangerous increase in global surface temperatures.
In other words, global warming is a compound hypothesis. And the second part of the hypothesis—water vapor feedback—is very much like the invisible unicorn in your garage. There is simply no a priori reason to believe that the climate operates by positive feedback in this way.
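For what it’s worth, the standard toy model of a constant linear feedback is a geometric series: an initial warming dT0 becomes dT0 / (1 − f) at equilibrium when a fraction f of each increment of warming feeds back as further warming. The numbers below are illustrative only, not estimates of the real climate:

```python
# Toy linear-feedback amplification: dT0 + f*dT0 + f^2*dT0 + ...
# converges to dT0 / (1 - f) for |f| < 1 and diverges (runs away) otherwise.
def equilibrium_warming(dT0, f):
    if not -1 < f < 1:
        raise ValueError("geometric series diverges for |f| >= 1")
    return dT0 / (1 - f)

no_feedback = equilibrium_warming(1.2, 0.0)    # warming from CO2 alone
with_feedback = equilibrium_warming(1.2, 0.6)  # amplified to ~3 degrees
```

The compound hypothesis is precisely the claim that f is large and positive; that is the part doing most of the work in the scary projections.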
I go into more detail about this on my blog.
http://brazil84.wordpress.com
Anyway, I realize this post is a bit off-topic, but I think the point is important. When discussing a claim, it’s helpful to make sure everyone is discussing the same claim.
You’ve probably looked at this issue more than I have. But honestly, skimming your blog has set off so many of my rationalist alarm bells that I doubt spending time there would be a productive. If it hadn’t been linked from here I would have ignored it.
-Adding the suffix “-ist” to words to describe the position of the scientific establishment. You create at least two new words this way and then use those words constantly. It makes you sound like a crazy person.
-Inventing rules to control and limit discussion and banning those that break these rules.
-Refusing to concede evidence to the other side
That plus a few of your other positions gives me pretty good reason to hold off investing time in a discussion until I’ve been assured of your reasonableness.
What word would you suggest I use to describe those who subscribe to the CAGW Hypothesis? And are you claiming that the “scientific establishment” subscribes to the CAGW Hypothesis?
I think this depends on how fair the rules are. Pretty much every discussion board, including this one, has rules and bans people who break those rules. Do you think my rules are unfair?
What evidence did I refuse to concede?
Suit yourself.
How about “those who subscribe to CAGW”? Certainly referring to them as alarmists begs the question. In general, the suffix “-ist” suggests an ideologue who can’t be reasoned with (there are exceptions, such as philosophical positions, but in political discussions this is almost always the case). Whether or not those who hold the view you disagree with can in fact be reasoned with is irrelevant—this coinage amounts to ad hominem by connotation.
I think the scientific establishment drops the ‘C’ (or at least doesn’t hold the extremely terrifying beliefs a lot of non-scientists hold about global warming), but since you’ve coined new terms at a few points in your blog I don’t know who you’re actually criticizing.
You don’t have a discussion board, you have a personal blog. The rules here are mostly informal and they’re designed to ensure quality of content and civility. We have a few rules about subject area but they are flexible, and are only needed at all because threads often get noisy with the large number of people that come here. Your rules limit discussion to one, very particular thesis. Which is fine if you’ve got a ton of readers and you’re trying to sort signal from noise. We ban people who constantly post New Agey nonsense because no one wants to be distracted by that stuff. You don’t have the readership to do that. You’ve had one dissenting commenter afaict. He was banned.
Your other rules: If someone here uses a straw man someone else replies “Hi. This is a straw man.” On your blog, you promise to ban them. If someone equivocates here another person will reply “I think you’re equivocating on this point.” and some discussion will then ensue about whether or not that person is in fact equivocating. On your blog however, if someone equivocates or is otherwise “weaselly” they must admit they are weaseling or they will be banned. Here if person A criticizes person B’s spelling person B will usually edit their comment and reply “Thanks.” On your blog, if you are person B you will assume that person A has conceded your argument.
This only begins to describe your list of 9(?) rules. I can’t see how they’re actually implemented because it looks like you deleted the violating comments. They aren’t so much unfair as preposterous. In the words of the one person who commented to the post listing your rules: “Dude, seriously, chill.”
Put it this way: you argue like an attorney not a scientist. Is anyone not clear what I mean by that?
Look, presumably you want smart people who disagree with you to challenge your beliefs. I’m giving you strong reasons why they might be avoiding you. Do with that information what you wish.
This is all very well said. The site is clearly an attempt to argue one position on AGW, rather than to weigh the evidence that comes in. More than that, all evidence to the contrary is held to be deeply stupid and/or dishonest. The result is… I don’t quite know how to put it. But the result is disturbing. It feels like one has stumbled into a strange single-person cult.
I find that a bit cumbersome. I try to use the word “warmist,” which I think is reasonable. Feel free to disagree. ETA: I will try to stop using the word “alarmist.”
Well in that case, “warmist” in fact does not describe the views of the scientific establishment.
That’s simply incorrect. Numerous comments either contradict or question what I have written.
I haven’t deleted anyone’s comments at all. As best I can recall, only one person was banned after a warning.
Yes, I’m a bit confused. It seems to me that whether you are an attorney, a scientist, or anything else, if you claim that I refused to concede some evidence, you should be prepared to either back up your claim with specifics or admit you cannot do so.
So please back up what you are saying. Please show me where I was “refusing to concede evidence” (whatever that means).
Not necessarily. A shockingly high percentage of smart people resort to the sort of tactics which I disdain. For example, I’ve seen it happen numerous times that people strawman me. It’s a complete waste of time to argue with somebody who isn’t even arguing against my actual position.
CAGW endorsers? I think neologisms basically suck. But, alright, fair enough.
I count 22 total comments. Half were made by you and another one was Word Press’s sample comment. And a solid percentage of the others agreed with you. ‘Numerous’ seems like too strong a word unless I am missing part of your blog. It does look like there was one other instance of disagreement than I saw the first time. Apologies for hyperbole in that case.
I assumed the person banned in this thread had comments deleted because you responded to all his points and then posted two more times before you banned him. I assumed he had responded to you and done something more heinous to be banned. This doesn’t make the banning better, it makes it worse. I don’t know what to tell you. The strict rules and the banning look ridiculous for a blog of that size.
My claim was poorly phrased. What I mean is that I would expect these sorts of questions to have some evidence on either side. It is highly likely that the majority position, especially, has at least some evidence in its favor. Someone who is honestly trying to figure out the science will look at the majority position and say “oh, these are fair arguments but here are some considerations that would make us doubt them” or “here is why these arguments look convincing but aren’t”. Even with theists we can say “Yeah I can see how a designer looks like a good explanation for the natural world. But here is this other, better mechanism that explains it all (evolution) and it turns out that positing a designer just pushes the question back a step.”
Now admittedly it is possible the endorsers of CAGW really have nothing resembling a convincing argument. And you’re certainly not obligated to pretend they do. But my problem isn’t just that you haven’t conceded that your opponent might make plausible points. I’m afraid it is more general and more vague. Reading your blog is a lot like reading one of the sites giving evidence for either side in the Kercher murder. One does not get the sense that you’re interested in truth. One gets the sense that you’ve made up your mind and are interested mostly in beating up the other side and winning the political battle. Your arguments are dressed like soldiers. I have no idea what your actual motivations are, of course. But this is the sense I get from reading the blog.
So you don’t want all smart people to challenge your beliefs. Presumably, though, you still want smart people to challenge your beliefs.
If you look more carefully, you will see I asked him a couple reasonable questions; he did not respond; and that was that.
I still don’t understand what the problem is. Do you think I have ignored or misrepresented the best evidence in favor of the warmist position?
Sure, if they do so in a reasonable fashion.
You asked him a couple reasonable questions. He did not respond. A few days later, you banned him.
I have no idea if you have done that. In the same way, if I had just read the “Amanda Knox is guilty” website I would have no idea if they had responded to the best arguments of the “Amanda Knox is innocent” crowd. But as with those websites your tone and form do not give me confidence that you have in fact done so. Maybe someone else can point out exactly what gives me that impression; I’m afraid I’m at a loss. Sorry.
His discussion strategy consists almost entirely of logical rudeness. I had assumed he was an attorney based on my prior experiences with the style long before he presented his qualification as evidence. My prejudices inform me that while they can often be quite competent at seeking out truth, speaking to lawyers is a terrible strategy for finding truth yourself.
Right. That’s what I meant when I said “that was that.” He was finito. Kaput. If people do not respond to reasonable questions I ask, I am not interested in engaging with them.
Well, if there is some important piece of evidence I am ignoring, somebody should post it. Then you can see how I respond and evaluate my tone.
Maybe he had better things to do than hang out on your web site on your timetable?
Maybe so . . . but so what? It’s not like I’m saying he’s a bad person.
[banning for dissenting in a reasonable fashion, etc]
One ‘so what?’ is that I think you could sincerely assert six mutually contradictory things before breakfast. The concept of ‘bad person’ involves philosophy I’ve never really sunk my teeth into. Virtue ethics I think they call it. But simplified deontology labels that a bad behavior and my preferred consequentialist model assigns all sorts of negatives to the expected utility thereabouts.
I have no idea what your point is. I asked that person a reasonable question; he did not answer; so I do not feel like engaging with him any further. It’s as simple as that.
I could; you could; anyone could. Again, so what?
We have been there already. In this case my point is that you can reasonably claim “I am willing to talk to dissenters” EXCLUSIVE-OR “me blocking dissenters is not relevant”. I don’t really care which you do but you’re doing both. That’s an AND not an XOR.
My more specific point is that this behavior is highly undesirable to me and I want to discourage it.
I don’t believe I could. It is the sincere part that is hard for me. Sincerity is a hard skill to master, at least at the higher levels of contradiction.
That’s not my claim. I am willing to talk to dissenters, but only certain kinds. For example if the dissenter wastes my time by insisting on mischaracterizing my position, I am no longer willing to talk to them.
If the dissenter refuses to answer reasonable questions about his position, I am no longer willing to talk to them.
And so on.
Everyone makes mistakes now and then.
Anyway, I think I understand at least part of your point now: You are accusing me of being a liar (or of lying to myself habitually). Is that it?
This thread is degenerating rapidly. Downvoting from after this comment down.
So am I. You are free with the block command and so I wouldn’t be particularly reluctant to use it on you. I honestly prefer overt trolls to your ‘kind’. That’s just my quirk. I prefer things out in the open.
No you’re just trying to make me sound bad and claim the moral high ground. Of course, what I actually said is probably a greater slight coming from me. I claim that I am a liar when I say six contradictory things but you could say them sincerely and the concept of ‘lie’ is way off in the background, a discarded child’s toy.
Please stop with the personal comments.
Also, are you claiming that I admitted to refusing to engage with people simply because they disagree with me? Simple yes or no question.
I claim that blocking behavior in response to dissent has clear relevance to your willingness to have smart dissenters challenge your beliefs and does, at a minimum, invalidate the rhetorical implication of the question ‘so what?’.
I replied to this comment only to give myself practice at avoiding this trap. Questions stop being simple ‘yes or no’ propositions when you know that they will be glued together in a way that does not follow. Respond to the frame, not the image.
As usual, I don’t understand what your point is, except it seems you have evaded my question.
I don’t understand your point here either, except it seems you are trying to insult me in a roundabout way by accusing me of some kind of dishonesty.
Indeed, it seems your comments towards me are more informed by personal animus than any desire to actually discuss or debate anything. It seems to me you are still annoyed that I pointed out a contradiction in your argument a few threads back.
In any event, I generally don’t engage with people who are consistently incoherent or with people who consistently insult me. It’s just a waste of my time. If anyone else wants to explain what Wedifred’s point is in a polite manner, I’m happy to listen. But as for Wedifred, I’m not engaging with him anymore.
Bye.
Not too complicated for a reader to understand.
I should clarify that obfuscation qualifies as ‘ignoring’.
Well, if you think there is some important piece of evidence I am obfuscating, please feel free to describe it.
What kinds of possible evidence would you expect to see if such positive feedbacks can happen?
To paraphrase Eliezer, ceteris paribus and without anything unknown at work, water vapor is a greenhouse gas and ought to make the Earth hotter. Also, ceteris paribus and without anything unknown at work, a hotter Earth ought to lead to more water vapor in the air. There is your ceteris-paribus-and-without-anything-unknown-at-work feedback loop.
Not cast iron proof. Maybe not yet enough to justify expensive counter-measures. But it is where the weight of the evidence sits before you start asking for impossible proof.
I’m not sure what you mean by “can happen,” since in some sense lots of things “can happen.”
Anyway, it’s not a full answer to your question, but the gold standard for substantiating the water vapor feedback hypothesis would be if the proponents of that hypothesis made specific interesting and accurate predictions about future events.
I disagree, and perhaps an analogy would help: All things being equal, cooler weather can be expected to lead to more snow cover. And all things being equal, more snow cover can be expected to result in cooler surface temperatures because of effects on the Earth’s albedo. So should we worry that the next big volcano will trigger an ice age?
The answer is “no,” and I think the mistake here is two-fold. First, rough reasoning gets exponentially rougher as you travel along a chain of deduction. Second, we can’t ignore the fact that the Earth’s climate is a complicated system which has been around for a long time. The normal assumption should be that if you push on such a system, then it will probably push back at you.
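The “chain of deduction” point can be made concrete with a toy calculation (the 80% per-step reliability is an arbitrary illustrative assumption, not a measured figure):

```python
# Toy illustration: if each link in a chain of reasoning is only
# roughly right (assume 80% reliable per step), confidence in the
# conclusion decays exponentially with the length of the chain.

def chain_confidence(per_step: float, steps: int) -> float:
    """Probability the whole chain holds, assuming independent steps."""
    return per_step ** steps

for n in (1, 2, 5):
    print(n, round(chain_confidence(0.8, n), 3))
# 1 step -> 0.8, 2 steps -> 0.64, 5 steps -> ~0.328
```

Two rough steps chained together are already noticeably less reliable than either step alone, which is the worry about the volcano-to-ice-age inference.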
No, that is evidence of authority. The gold standard would be if the assumptions that led to the hypothesis also led to specific interesting and accurate predictions about future events (and don’t lead to inaccurate predictions).
I agree . . . as a practical matter there might not be much difference, but I agree.
But when will it push back at you? Before or after it has triggered a mass extinction event?
There is evidence that there have been multiple mass extinction events in the planet’s history, some of which may have been caused by the earth getting too hot or too cold.
Could you give me an example or two of such mass extinction events which may have been caused by temperature changes? I would like to think about your point in context.
A system can have a balance between positive and negative feedback. If it has a mix of both, there’s amplification, not necessarily a runaway. (The balance between solar input and radiation to space, among other things, provides negative feedback.)
Moreover, it isn’t even just a multiplication problem. There are different styles of feedback—proportional, integral, differential—and those latter two can come with different time scales.
It’s obvious that pushing the same direction for a hundred years can be much bigger a deal than pushing a hundred times as hard in the same direction for a day, but it’s also true of a hundred-times-as-strong push lasting for, say, two years. Or, depending on the different feedbacks, the hundred times as hard for a day could have a bigger effect.
And all of that is without going nonlinear!
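The amplification-versus-runaway distinction can be sketched with a minimal linear model (an illustration of the mathematics only, not a climate simulation; the 0.5 feedback fraction is an arbitrary assumption):

```python
# Minimal linear feedback loop: T_new = forcing + f * T_old.
# A net feedback fraction 0 < f < 1 amplifies the forcing by a
# factor of 1/(1 - f); only f >= 1 produces a runaway.

def equilibrium_response(forcing: float, f: float, steps: int = 1000) -> float:
    """Iterate the loop until it settles (converges for f < 1)."""
    t = 0.0
    for _ in range(steps):
        t = forcing + f * t
    return t

direct = 1.0                                    # no-feedback response
amplified = equilibrium_response(direct, f=0.5)
print(amplified)  # converges to 1 / (1 - 0.5) = 2.0: amplified, not runaway
```

So a net-positive feedback below the runaway threshold just multiplies the response by a finite factor, which is the point about amplification without a runaway.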
I’m not sure of that. If negative feedback dominates and overwhelms any positive feedback, then how would you get amplification?
Anyway, the burden is on the proponents of CAGW to demonstrate amplification. So far they have not done so.
Sorry for the ambiguity. I should have reflected your wording more closely and written “What kinds of possible evidence would you expect to see if the climate operates by positive feedback in this way?” Part of the purpose of the question was to determine what you meant when you chose that wording.
Since the effects are alleged to take place over decades, asking to see this evidence now is asking for impossible evidence.
A priori, yes we should. However, we would be justified in decreasing our concern if either (1) additional theoretical consideration show that, in fact, according to our best theory, that loop probably wouldn’t occur, or (2) despite our best theory, we’ve observed many big volcanoes erupt without setting off such loops.
Let us suppose that (2) is the case. Then this would decrease our confidence in our best climatological theory. However, if that same theory asserts that X will probably cause Y, where X is not very similar to something that we’ve observed in the past (so, not a big volcano eruption), then our best bet is still that Y will follow X, even though our theory blew it on the consequences of the volcano eruption. Our confidence in Y will go down, but it will exceed our confidence in ~Y. (Otherwise, the theory wouldn’t be our “best”.)
(To the best of my knowledge, our theories don’t mispredict the consequences of volcanoes, though, for all I know, that could be only because volcanoes were part of the input data used in the theories’ construction.)
This sounds like you want to construct a climate theory by taking an a priori first-principles theory and adding an ad hoc “push back” mechanism, according to which the current equilibrium is assumed to be more stable than the first principles would justify. It’s fine to believe in such a mechanism, even if you can’t justify it from first principles, provided that you have direct empirical evidence for it. In which case, great, add that evidence to the pile of all the other evidence that we use to justify beliefs about the climate, and let’s see how it all adds up.
I’m not sure what you mean by “decades,” since the warmists have had well over 20 years now. Anyway, the warming which took place during the 1990s was alleged to have been the result of CO2 emissions, agreed? And do you agree that some of these computer climate simulations have been used to make shorter-term predictions?
Well, do you agree that there are many different possible feedback loops one could postulate?
I’m not sure whether you would classify it as a first principle or as empirical evidence . . . it’s just common sense.
Scientific research citations, please. The ones I know of go the other way.
What exactly is the claim I made for which you are requesting a citation? Let’s make sure we are on the same page here.
Also, if you just want to debate global warming as opposed to rationalism in general, I would ask that you visit my blog.
I was asking for citations suggesting that water vapor feedback doesn’t happen. I’ll grant that the argument is off-topic, though.
I’m not trying to get cute, but please re-read my post. I did not claim that water vapor feedback does not happen. (Obviously that’s an important question, and I invite you to discuss it with me on my blog.)
I apologize—I assumed your claim was that an increase of CO2 sufficient to directly cause a 1°C rise (about a doubling, is what I’ve heard) would make no more than 1°C rise. I objected because my current understanding is that the water vapor increases that to about 3°C rise.
If we have no disagreement on that point, we have no disagreement on anything that has been said denotatively so far. And, as we can both agree, any further remarks would be severely off-topic.
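For what it’s worth, the arithmetic linking those two numbers is the standard linear-feedback gain (a sketch taking the 1°C and 3°C figures above as given; they are assumptions of this exchange, not established results):

```python
# Feedback fraction implied by the figures above: if a doubling of
# CO2 gives 1 degree C directly and ~3 degrees C once feedbacks act,
# the implied net feedback fraction f satisfies total = direct / (1 - f).

direct_warming = 1.0   # degrees C, no-feedback response (assumed above)
total_warming = 3.0    # degrees C, with water-vapor etc. (assumed above)

feedback_fraction = 1.0 - direct_warming / total_warming   # f = 2/3
print(round(feedback_fraction, 3))  # 0.667

# Consistency check: direct / (1 - f) recovers the total
assert abs(direct_warming / (1.0 - feedback_fraction) - total_warming) < 1e-9
```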
FWIW I do disagree with you on that point. But it was a different point from the one I was making.
I address the sensitivity issue in large part here:
http://brazil84.wordpress.com/2008/09/12/40a-simulations/
How about we say ‘even under conditions of uncertainty’. Decision theory handles decisions under certainty too. See, for example, the majority of decision theory conversations around here. (It’s just simpler discussing the certainty cases unless the uncertainty plays a specific part in the specific case.)
Re: “if there’s something we can do about AGW, we need to do it now, not in a hundred years”
Sure we can do things about AGW—but why would we want to? Why exactly is a warmer planet bad? Since we are currently in an interglacial in an ice age, a warmer planet would surely be a big win for the biosphere.
Well, a lot of bad things might happen.
A lot of good things might happen too—avoiding reglaciation, greening the world’s deserts and taking the planet out of the freezer being among them.
One cannot just look at the negative things—one must keep a balance sheet, and see how the positive and negative aspects add up.
Trying to stay where we are appears to be a particularly stupid and risky option—due to the likelihood of catastrophic reglaciation. To have a reasonable safety margin, we must try to heat the planet up. IMO, the issue is not whether to do it, but rather how much—and how fast.
Yeah but this really isn’t a difficult calculation. Yeah, you might green a desert or two to make up for some of the farm land that gets ruined but you’ll have no infrastructure set up to take advantage of it. It will cost you hundreds of billions of dollars while people starve. Human civilization has adapted to the planet in a particular way and there will always be high costs associated with rapid change to the planet that humans can’t easily adjust to. If we had built our cities knowing that the climate was going to change rapidly that would be one thing, the costs would be minimized.
Also, I think we can hold off worrying about the next glacial period until we’re considerably more than 12,000 years in to it.
Re: “Also, I think we can hold off worrying about the next glacial period until we’re considerably more than 12,000 years in to it.”
You are kidding, right? Interglacials don’t last for long. The next glacial period is probably overdue:
http://www.fcpp.org/images/publications/ME036%20Graph%201.jpg
...once the planet gets into ice-driven positive feedback cycle that reglaciation represents, stopping it may prove challenging.
Maybe we’re thinking at different scopes. Each unit of that graph represents 10,000 years. Messing with the climate now risks expensive disasters that stall economic and technological development over the next 500 years. Unless you think a glacier is going to be crushing New York City between now and then the best thing to do is develop as much as possible and learn things until you can confidently hack the climate.
Global warming is benign, though. The changes are generally positive. The idea of warming causing an expensive disaster that stalls economic and technological development is a fearmongering fantasy—and is not supported by science. The faster warming happens, the more quickly the Earth’s carrying capacity will go up, the more food we will be able to grow, the faster the deserts, arid regions and icy-wastelands will vanish, and the more minds and resources we will have to dedicate to our real problems.
Those who want to stop warming appear to have identified technological development as the cause of the problem in the first place—and seem to be doing what they can to sabotage development—by restricting the access to resources by businesses—thereby attempting to cut off their air supply. My assessment is that such behaviour is likely to have a destructive effect that increases the planet’s risk of reglaciation.
In the long run, they may be positive. In the short run, melting the Greenland and Antarctic ice sheets means that most of Manhattan Island, most of Florida, and plenty of other very valuable developed land will end up underwater. The cost of relocating the inhabitants and rebuilding the infrastructure would be enormous, easily reaching into the trillions of dollars. Might as well drop a hydrogen bomb on New York City!
ETA: The “hydrogen bomb” comment was stupid and gratuitous. I blame sleep deprivation.
Well, that’s of course not right. The primary loss in dropping an H-bomb on NYC is the loss of human life—both in a moral and an economic sense.
Here is a point to consider. Over the last 100 years the population of the earth has increased by 5 billion. We have created new places for all of those people to live and work. And that was done with a population much smaller than we have today. Over the next 100 years we may add 3 billion more and we will need place for those people to live and work.
It’s not immediately clear that the costs of building all of this in a new location are that huge, relatively speaking.
That would be a special sort of hydrogen bomb that expands by 3mm per year, I presume.
Okay, bad metaphor.
::sigh::
That’s what I get for commenting when sleep deprived. :(
Ok, icy wastelands I can see. But the deserts and arid regions? Our deserts here in Australia seem to have more than enough heat already. And the most fertile land is that which is right near the coast, ready to be covered in salty water as the ice melts. Then all we would have left is desert.
According to Jared Diamond’s book Collapse, Australia’s biggest agriculture problem is a lack of good topsoil; you really can’t farm it very well, because if you harvest without returning the nutrients from the plants to the soil, you end up unable to grow much of anything at all a year later.
Deserts are mostly an ice-age phenomenon. The positive effects of increased evaporation and precipitation eventually dominate as temperatures rise. Compare the humidity rises in northern Australia to see the effect—or see:
“Sahara desert goes green, thanks to warming”
http://timesofindia.indiatimes.com/home/environment/global-warming/Sahara-desert-goes-green-thanks-to-warming/articleshow/4849759.cms
Increased precipitation may also mean more hurricanes and other destructive storms. :(
Regardless of whether the ultimate effects of global warming are a net positive or negative, there are likely to be costly disruptions, as areas currently good for agriculture and/or habitation cease to be good for them, even if they’re replaced by other areas.
Exactly.
I’m sure we can both produce a long list of positive and negative effects of global warming. Picking out items from the “negative” list does not constitute much of an argument—you have to look at the big picture.
Greenland and Antarctica have enormous inertia. Ice takes a long time to melt—and antarctic ice is an average of 2 kilometres thick—it will probably take tens of thousands of years to melt it. So change is unlikely to be particularly rapid.
I am not advocating particularly rapid change. Extended change may well be even more inconvenient, of course. It is quite possible that we should try and get climate change over with as soon as possible—to avoid lengthy disruptive changes.
A warmer planet will have more and better farming opportunities, and will sustainably support more people. It is the arid ice-age climate with its deserts and permafrost that is hostile to living systems. Today we have to construct greenhouses artificially to grow plants for food. If we can just end this horrifying ice age, the whole planet will become our greenhouse.
There is no good reason to think that. The last few interglacials were only around 10,000 years long. The end of this one may well be overdue.
You’re talking about things that a civilization considerably more advanced than ours should strongly consider. But we don’t even know how to heat the planet without nasty externalities. Right now human civilization is in the “don’t fuck it up” stage. You don’t go messing with the climate until you know what you’re doing or you have to take the chance just to survive.
No. The important thing is to get away from the cliff edge that represents reglaciation. That is the catastrophe which we most urgently need to avoid. Staying near to the edge of the “reglaciation” cliff is a really bad option for humanity and the rest of the planet. That way, potentially billions may die in a reglaciation catastrophe. Safety considerations are one of the main reasons for wanting to further warm the planet up.
We should not hang around on the edge of the “reglaciation” cliff, waiting for technology to develop. Nor should we engage in ridiculous schemes intended to cool the planet down. We should just walk away from the cliff—and probably go as quickly as conveniently possible before the ground crumbles beneath our feet. The longer we dilly-dally around, the bigger our chances of going over the edge.
This does not seem very complicated to me. Reglaciation looms as a clear and present danger. We must do our very best to go in the opposite direction. We can debate how fast we can safely run, how far away is a safe distance, etc—but run we absolutely must.
The Milankovic forcing is small. Even in the unforced case we would probably miss the next trigger and have 50 Ka of peace and quiet. Now we’re well past the threshold. Find something else to worry about, please, like ocean acidification, coastal flooding, rapid regional climate shifts, and ecosystem disruption for instance.
You are assuming that Milankovitch cycles are the cause of the problem?
That is debated—due to things like:
http://en.wikipedia.org/wiki/100,000-year_problem
...and the list here:
http://en.wikipedia.org/wiki/Milankovitch_cycles#Problems
See also some of the alternative hypotheses:
“Sun’s fickle heart may leave us cold”
http://www.newscientist.com/article/mg19325884.500-suns-fickle-heart-may-leave-us-cold.html
...and...
“A New Theory of Glacial Cycles”
http://muller.lbl.gov/pages/glacialmain.htm
Unreferenced claims that “we are well past the threshold” don’t count as particularly useful evidence.
I recommend you back up such material if you want to continue this discussion.
If it becomes an imminent threat, reglaciation may be easier to avert than warming. Right now, we know more about how to heat the planet than how to cool it off.
Reglaciation is an imminent threat—and we don’t know if we would be able to stop it.
A lot of the misguided research on mitigating global warming has investigated how to cool the planet down. I know of no research effort on a similar scale devoted to heating the planet up. So, I am not clear about where the idea that we know more about how to heat the planet than we do about how to cool it is coming from.
Well, it’s fairly well-known that putting a lot of greenhouse gases into the atmosphere will warm up the planet. ;)
Sure—and there’s also black carbon:
http://www.time.com/time/health/article/0,8599,1938379,00.html
...and planting trees in the north:
http://www.scientificamerican.com/article.cfm?id=tropical-forests-cool-earth
Hopefully in due course we will have fusion and mirrors in space on our side as well.
I don’t think anyone knows if a concerted effort could prevent reglaciation, though. If anyone wants to make the case that we should downplay the risk of reglaciation because we could avert it, I would say: prove it. This looks potentially extremely dangerous to the planet to me: show me that it is not.
Until we are much more confident in our climate control abilities, I think a safe distance is prudent. IMO, that involves at least melting Greenland.
The planet? The planet is used to glaciers. It’s the humans who may not like them.
I mostly mean the planet’s lifeforms. Few living things like ice crystals. They typically rupture cell walls—causing rapid death.
You make a valid point, but you neglect to mention that the same temperature/pressure regimes that generate ice crystals also make metals (especially scrap metal alloys) very brittle and prone to cracking, not to mention long-term effects on malleability.
Kind of a big thing to leave off!
You have a point there. If you want to build something out of metal and not have it break—and there are lots of important things that can be made out of metal—a cold environment makes it harder.
That was what we thought ten years ago. There has been considerable and surprising progress on ice sheet dynamics. Basically, ice sheets do not melt from the top. They crack, fail mechanically, and slip into the sea. This is especially true of those whose base is below sea level, specifically the West Antarctic Ice sheet (WAIS).
14 Ka ago sea level rose by several meters per century for several centuries. The mechanism was the partial failure of the WAIS. There’s still some left.
Don’t get me wrong; this will not happen next week, and there will be no resulting tsunami. But a meter of sea level rise in this century is likely, two is plausible, and four isn’t totally excluded.
You seem fond of don’t-worry arguments. This makes you an instance of Eliezer’s point.
You sound as though you are arguing with something in my post—but it is not clear what—since you don’t really present much of a counter-argument. Greenland and Antarctica really do have enormous thermal inertia. Ice really does take a long time to melt—and Antarctic ice really is an average of 2 kilometres thick.
You are arguing with the “it will probably take tens of thousands of years to melt it”? Consider that a ballpark figure. Currently the Antarctic ice sheet is getting thicker and thicker—and it is −37 degrees C down around the pole. So: it is not going anywhere anytime soon.
Perhaps paranoia has its place—but I think it is best recognised as such.
Those who make calls to action often distort the picture—to make their cause seem more urgent.
So: such causes become surrounded with distortions and misinformation designed to manipulate others.
We don’t need to melt them to raise the sea level. All that ice floating around does just as well.
We have had around 1.7 mm per year of sea-level rise for the 20th century.
That seems pretty slow to me.
It is true that the record—at the peak of the last glacial retreat—was some 65 mm/year—but there was a lot more ice all over Russia and Canada back then—and we are unlikely to see anything like that with today’s much-smaller ice caps.
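Spelling out the arithmetic behind “pretty slow” (a back-of-envelope sketch using only the rates quoted above):

```python
# Back-of-envelope comparison of the sea-level-rise rates quoted above.
observed_rate = 1.7   # mm/year, 20th-century average
peak_rate = 65.0      # mm/year, peak of the last glacial retreat

metres_per_century = observed_rate * 100 / 1000.0
print(round(metres_per_century, 2))         # 0.17 m per century
print(round(peak_rate / observed_rate, 1))  # the peak was ~38x faster
```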
I don’t actually consider the parent to be a particularly bad (ie. −2) contribution. All else being equal the observation is extremely important. From my hazy recollection global warming, via mechanisms which I don’t particularly recall, may actually hasten the return of the cold. But I’d like the question answered by someone who actually knows the science.
Usually taking steps away from a cliff edge reduces the risk of going over. Maybe not if you are attached by a bungee to a point near the edge—but I don’t see much evidence for that.
I do not know if it is what you were thinking of—but probably the most well known claim that warming will lead to cooling is this one:
“What about the idea that global warming could slow the north atlantic current, cooling northern Europe, thereby precipitating a new ice age? This idea is hogwash—it never made any scientific sense in the first place, and has now been widely discredited.”
http://timtyler.org/end_the_ice_age/
Jumping around on a pogo stick near the edge of a cliff increases the risk of going over, even if you’re trying to hop away from the edge. I am not comfortable with making any particular predictions with regard to the medium term consequences of our intervention.
Would you go so far as to say ‘we should release more greenhouse gasses so we don’t all freeze’?
It’s pretty-much: “less-ice:good, more-ice:bad”.
I am not certain where warming resources would be best placed. One of the options I have looked at involves planting trees at high latitudes:
http://timtyler.org/tundra_reclamation/
Spreading black ground sheets on ice-sheets is another idea which I discuss on that page.
Restricting greenhouse gas emissions appears unnecessary—but AFAIK deliberately accelerating their production has not been seriously proposed—and I am not sure it makes sense.
How convenient—a school of thought that holds that we should keep doing exactly as we are now. I expect a high portion of that school’s adherents are examples of motivated cognition—that their search for evidence and explanations went about as far as needed for that outcome.
Somehow you managed to write this as a reply to a comment where I discussed making some active interventions.
Yes, I replied only to the final fragment of your overall comment (that pertaining to emissions). Sorry for the confusion. I understood that to be a report by you of others’ tendencies to not suggest increasing emissions; I was criticizing those others. My accusation was that it’s suspicious to support exactly the present levels on the basis of a putative overall warming-is-good project.
It may also well be that it’s too expensive to spew any more CO2 into the air than we already do, but it’s still odd if the possibility hasn’t been suggested.
FWIW, I don’t know how fast the planet should be warmed. Maybe we are going too fast—or maybe we are going too slowly.
I suspect that we are going too slowly—on the grounds that reglaciation is a clear catastrophe, while the effects of warming are mostly fluff, and the inertia of the huge ice caps slows warming to an intolerable crawl.
Anyway, I think it is unlikely that we are warming at the “right” speed—for much the same reasons that a randomly chosen number from 1 to 100 is unlikely to be 10.
FWIW, trees at high latitudes and blackening ice are also rarely suggested. I expect there has been research into the effects of increased carbon emissions, but it is not a trivial thing to search for, due to noise.
If anyone knows of research relating to what the “best” gas to pump into the atmosphere to produce warming would be, feel free to speak up.
One obvious problem with pumping out more carbon is that this can have side effects distinct from warming. People will protest about the heavy metals that are dug up with it also being put into the atmosphere, for instance. It is not clear that additional greenhouse gases are the best way of further warming the planet.
I’ve had a lot of internal battles over what to make of the AGW debate, and finally decided that if I’m willing to trust in science on evolution vs. intelligent design, I have to trust in science on AGW vs. nuh uh! as well.
After further reflection, I think my skepticism of AGW was motivated by a disdain for the people who use it as a vehicle for policy—“How convenient for you that there’s a huge global problem that calls for exactly the same policies you were calling for earlier!”
I still think that policies designed to mitigate AGW (ethanol, handouts to “green” companies, handouts to well-connected polluters with restrictions on less-well-connected-ones) might be worse than the effects of AGW. But that’s not a legitimate reason for arguing against AGW.
I personally distinguish between branches of science and the amount of trust I give them. The simpler the things they are talking about and the more successful the field is at making predictions the more weight I give it.
Climate science involves a number of complex feedback loops, such as increasing plant growth, and it regularly changes its predictions as it incorporates more details into its simulations (although they all trend toward a positive temperature increase). So I don’t assign it the same weight as the predictions of a particle physicist on particles in standard conditions.
I think more relevant is that a lot of the policies “designed to mitigate AGW” aren’t designed to mitigate AGW—they are, as you mentioned, policies which people were calling for already. The real policies to mitigate AGW are things like carbon taxes, which no one would have proposed if AGW wasn’t being considered.
For the record, I’ve read that cap-and-trade worked very well when it was applied to sulfur emissions (which cause acid rain).
I’m all for some version of cap-and-trade for carbon as long as incumbent polluters or politically favored groups don’t get huge discounts on permits.
The problem with cap-and-trade in the U.S. is that it will become a huge fund for bribing special interests.
Yeah, that’s a potential risk.
That being said, I think that some of the issues that were called “bribes” made sense from a policy perspective; for example, part of the cap-and-trade bill was that for the first few years, a number of carbon credits would be given to power companies, because they didn’t want a sudden spike in the cost of electricity.
Still, even with those trade-offs, I still think it would work as a policy; even if you’re getting carbon credits for free for the first few years, you still now have a motivation to reduce emissions if you can, since then you can sell off the extra carbon credits to other companies that can’t reduce carbon emissions as easily. And those free credits would decrease every year anyway. Meanwhile, all of those credits were still included under the “cap”, the maximum total amount of carbon that could be released.
I think that a cap-and-trade policy could be an effective way to let the market figure out what the most cost-effective way to reduce carbon emissions is.
CronoDAS: Can you clarify what you mean by “worked very well”? Do you specifically mean that the policy was effective at reducing sulfur emissions? (As opposed to, e.g. saying that it reduced sulfur emissions with minimal negative short-term economic impact.)
There’s a fair bit of evidence that it both reduced sulfur dioxide emission and had little negative economic impact. See this article which discusses a lot of these issues and what can be learned going forwards about how to apply cap and trade systems for other pollutants.
Great article; thanks for the clarification!
Perhaps I should have said “designed”.
I am also wary of policies that are ostensibly for one thing (carbon taxes, cap and trade), but actually for another (giving incumbents and well-connected/non-foreign companies an advantage over newcomers/foreign companies).
That’s a general problem in politics, unfortunately—not particularly related to AGW.
True, but the point here is that to some it seems easier to argue against AGW instead of against the policies.
Agreed.
A question, at what probability level should you go around saying “I believe in X”?
The term “believe in” is used whenever someone else assigns a probability sufficiently higher than yours for you to take offense. For example, if someone assigns a negligible probability to hard takeoff and Robin Hanson assigns it a 10% probability, they will say that Robin Hanson believes in hard takeoff.
If you don’t care about being honest, you should say it when it wins to do so.
If you do care about being honest, you should say it when doing so helps give the person you’re communicating with an accurate estimate of your epistemic state.
My rule of thumb is that I believe in X when I would be surprised by not-X. There is no proper probability level, though. Its usage is just governed by social convention, not any math.
You should instead say “I believe X”. “I believe in X” is too easy to confuse with “I like X”. If you are able to introspect your probability level, you can say “I believe probability P: X”.
Better to simply say “X” and leave no room for confusion whatever.
Depends on the consequences of belief; in particular the price you’d pay for mistaken belief, vs reward of correctness.
How does it depend? Can I expect you to assign a P(X)<0.5 but still state you believe in X if the payoffs of X are right?
Sure. I do not believe that I’d crash my car if I drove today. But I’d still wear a seatbelt.
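The payoff asymmetry behind the seatbelt point can be made concrete with a toy expected-cost comparison. All the numbers below are made up for illustration; the only thing that matters is the asymmetry between a tiny probability and a huge cost:

```python
# Illustrative expected-cost sketch: why you wear a seatbelt even though
# you don't "believe" you'll crash (P(crash) << 0.5). All numbers are
# hypothetical stand-ins, not real actuarial figures.
p_crash = 1e-4                 # assumed probability of crashing on one drive
cost_crash_no_belt = 1_000_000 # stand-in cost of an unbelted crash
cost_crash_belt = 100_000      # stand-in cost of a belted crash
cost_belt = 1                  # minor inconvenience of buckling up

ev_no_belt = p_crash * cost_crash_no_belt
ev_belt = p_crash * cost_crash_belt + cost_belt

print(ev_no_belt)  # 100.0
print(ev_belt)     # 11.0 -- buckling up wins despite the low probability
```

The decision is driven entirely by the expected costs, not by whether the crash probability crosses any “belief” threshold.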
The word belief seems to be used in two different contexts.
1) “I believe that the apple is red.” This implies the apple is red. It uses belief in the encyclopaedic sense.
2) “I believe in anthropogenic global warming.” This means that the person thinks that global warming should be taken seriously in plans for the future. It could mean anything from a high probability with low consequences to a low probability with high consequences.
I think that this is dangerous from a sanity point of view, as the two can easily get confused. I can’t think of a good word for the second meaning; perhaps “matters”? So you would say that AGW matters, rather than “I believe in AGW”.
I’m not sure what you mean by 1). The truth-value of that sentence is independent of the truth-value of the clause “the apple is red”. If the sentence is true, this does seem to entail that the person speaking it assigns a probability to the apple being red, but even in that case I see no reason to set the threshold at 1⁄2, such that the sentence is logically equivalent to “I think this apple more likely to be red than not”.
As for 2), one might say “I worry about AGW”, and you can worry about risks to which you assign far less than 1⁄2 probability, and still take action on the basis of such assessments.
“I believe in AGW” seems to have a different meaning altogether: it means roughly “I believe that the scientific case for AGW is basically sound”. To say “I am an AGW skeptic” (putatively the complement of the belief statement) is to say “I believe that the AGW theory lacks a sound scientific grounding”. The entailments of these statements have more to do with trust in scientific, political and economic institutions than they have to do with the facts of the matter: few people professing either belief have much direct knowledge of the relevant physical facts and theoretical insights.
From the wikipedia link
Holding something to be true means you can do deductive inference on it and ignore the cases where it is untrue.
So rather than flipping between being a skeptic and a believer, you can hold the position that you don’t know, but assign a probability to the proposition. It means you have to consider the consequences of what happens in both situations. In this state you should try and convince people to share your level of uncertainty and seek to reduce your uncertainty, rather than arguing purely pro or con.
This I am perfectly happy with! I would love for people to be able to say that they don’t believe the blue box holds the diamond, but still pick it anyway. I’m objecting to the devaluing of the word belief.
I’m finding this exchange quite interesting.
Again, I am uneasy with the mixing of “belief” language and “probability” language in one and the same sentence, perhaps because I’m only recently delving into probability theory and the two are not yet well integrated.
If probabilities are quantities between 0 and 1, but never either of these extremes—such that everything is a “level of uncertainty”—we can never use “belief” in the sense of ignoring or rejecting the complement entirely. (And even then: for things I’m really sure of, I tend to use “I know” rather than “I believe”.)
Based on an incomplete examination of my own writing (email archives of the past year or so), I use the phrase “I believe” to express conclusions I have provisionally reached but that new evidence could invalidate. (I use the phrase “I don’t believe” much more rarely, and so I can say that in none of my emails for the past ten years does the phrase appear with the meaning “I don’t believe in X”. Of course, per my earlier statement about modals, I don’t expect that my beliefs will actually be expressed using the phrase “I believe”.)
In that sense, “I believe” does have to do with deductions; my beliefs are the conclusions I judge to be plausible enough based on the evidence that they are worth following through to further conclusions. If I believe Brian will be at the conference in August, then I may email Brian to plan a meeting at the conference. If I believe probability theory is a useful tool, I therefore believe I should learn more about probability theory.
I’m starting to feel like we should probably chuck out our current language for dealing with states of knowledge and start over.
For example, how does
mean we should be assessing the pay-offs of various actions? How much evidence should we expect to tip the scales in the other direction?
Do you normally think through the consequences of sending an email to Brian if he doesn’t happen to be going to the conference?
More than you might think. Warning: irrelevancies ahead.
I’m an introvert, i.e. I tend to only reach out to people after some agonizing over whether it’s the right thing to do. My default attitude is to clam up. Sometimes it’s hellish: I pass someone on the street, I smile and say hi, they fail to acknowledge me, and I spend the rest of the day in a blue funk over either a) what some stranger is thinking of me now or b) having suffered rejection.
Yes, it’s a fucked-up way to be. You learn to adjust. :)
But you get the point I was trying to make? A more extreme example: you don’t think through the possibility that turning on your bathroom tap might cause a negative singularity (due to it causing an unusual mixture of bacteria that have sex and form a self-aware gene network capable of creating novel regulatory pathways and recursive self-improvement).
That possibility doesn’t cross your mind, and probably shouldn’t when making decisions about turning on taps. You X that turning on your tap won’t cause a singularity. I want a word for X. Believe seems to be tainted.
Yes, I think I get your point.
“I’m confident” seems a good antonym of “I worry”. I’m confident that turning on the tap is safe for humanity. I have reasonable expectations of getting water when I turn on the tap (though these are sometimes violated).
The tap example is reminiscent of the blue tentacle line in Technical Explanation. Riffing on that, beliefs perhaps correspond to scenarios you construct without really thinking about it, plausible extrapolations from actual knowledge.
The question was whether you could say that you believed in something that you thought occurred with less than 0.5 probability. If you thought you would crash your car with less than 0.5 probability, you may still wear a seatbelt, but you wouldn’t say that you believed you would crash your car.
Above you wrote:
In your three-box example, if you believed the diamond was in the blue box with p=0.4, and the other two with p=0.3 each, it would sound very strange to me to say “I believe the diamond is in the blue box.” Instead, I would say “I think it’s most likely that the diamond is in the blue box.”
The use of the word “believe” doesn’t correspond to a single probability level, and as such, isn’t very Bayesian. For instance, say there is a lottery with one million tickets, and you have one ticket. Do you believe you will not win? No, and that seems true no matter how many tickets the lottery has.
Essentially, use of the word “believe” indicates that you’re talking about an axiom, a statement that you’re using as a further assumption without questioning or looking at the probabilities.
Another way to see it: consider the case where an examiner asks you to choose between three boxes (red, green, blue) one of which contains a diamond.
Normally you would assign 1⁄3 probability to each box containing the diamond, but you have cleverly slipped in a priming cue while talking earlier with the examiner, saying “I love blue skies”, and your experience with priming is such that your probability assignments are now 1/3 + epsilon for blue, and 1/3 - epsilon/2 for each of red and green.
We now have a situation where p(blue) < .5 but where you nevertheless “believe that” the diamond is in the blue box, insofar as that is the box you’d pick.
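The three-box scenario can be written out numerically. The epsilon value below is purely illustrative; the point is just that the highest-probability box is the one you would pick, even though its probability stays well under 0.5:

```python
# Three-box example with an illustrative priming shift: blue gains a
# small epsilon of probability, red and green each lose epsilon/2, so
# the distribution still sums to 1.
epsilon = 0.02
probs = {
    "blue":  1/3 + epsilon,
    "red":   1/3 - epsilon / 2,
    "green": 1/3 - epsilon / 2,
}
assert abs(sum(probs.values()) - 1.0) < 1e-12  # valid distribution

# The box you'd pick is simply the one with the highest probability...
pick = max(probs, key=probs.get)
print(pick)               # "blue"
# ...yet that probability is still well below one half.
print(probs[pick] < 0.5)  # True
```

So “believing the diamond is in the blue box”, in the sense of picking it, requires only that blue be the argmax, not that P(blue) exceed 1/2.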
The verb “believe” expresses what linguists call a modality. I think we should be careful when mixing in the same sentence statements of probability and everyday language modalities, because they belong to different levels of abstraction. (Disclaimer: I am no linguist and claim only basic familiarity with concepts like modalities and the pragmatics of language. But they do seem like awesome tools, that I should learn more about just as I should learn more about probability theory.)
I wouldn’t assume that “I believe X” is the same as “it is more likely than not that X is true”. “I believe” is called an epistemic modality, one that expresses a state of knowledge. An interesting property of epistemic modalities is that they usually weaken whatever statement they are associated with. For instance, “The cat is on the mat” is a nice definite statement. If you told someone, “I believe the cat is on the mat”, they might ask “Oh, but are you sure?”. Paradoxically, even “I’m sure the cat is on the mat” would be taken as expressing less confidence than “The cat is on the mat” without a modal part.
Bruno Latour is fond of pointing out that the construction of scientific knowledge largely involves stripping modalities. You go from “Kahneman (1982) suggests that X”, to “Kahneman has shown that X”, to “It is well established that X”, to “Since X”, and finally you don’t even mention X any longer, it has merged into background knowledge.
Unfortunately, this process can occur even in the absence of any attempt to obtain evidence for X. Sometimes just by accident.
The level that wins. There’s no reason to expect this to be consistent across beliefs and contexts.
A decent paleontologist doesn’t need a creationist to ask this question. It’s the second thing you think of, right after “wow, whose bone is that? Maybe something’s between A and B”. And if you can actually put forward a theory, however weird, of why there are two gaps, then you advance science. On the other hand, some questions (like the origin(s) of flowers) are so popular that more fundamental ones don’t attract due attention (like, what the heck did MIKC-type MADS genes regulate in plants in the 100 million years before flowers appeared?). And a smart creationist would ask you that. And it’s not even evidence that you can’t obtain.
“As I once said to someone who questioned whether humans were really related to apes: “That question might have made sense when Darwin first came up with the hypothesis, but this is the twenty-first century. We can read the genes. Human beings and chimpanzees have 95% shared genetic material. It’s over.” ”
I don’t believe any scientist worth their lab-coat would ever use the phrase “It’s over”.
One of the central tenets of science is constant questioning and healthy skepticism. Statements which imply that “since it’s good enough to convince you, it’s the end of the debate” do not endear the scientific community to others.
I understand that leaving room for doubt and refusing to ever be 100% certain may be seen as a weakness that people can exploit, but these are the very principles that will make science stand the test of time and should not be casually discarded to remove short term hassle.
I will never let gut feeling blind me in the face of evidence and will always look at any new facts with an open mind—evolution or otherwise—and as a purveyor of such science I believe that it is your responsibility to do likewise.
Welcome to LessWrong!
One of the common tropes around here is that zero and one are not probabilities (that’s probability in the Bayesian sense). When Eliezer writes “it’s over”, he doesn’t mean that no evidence could convince him otherwise; he just means that he assesses the probability of observing such evidence as negligible in light of the currently available evidence. (Or at least, that’s my understanding.)
The phrase “it’s over” shouldn’t be taken to mean that we can learn no new facts; rather that we cannot go back to a previous state of ignorance.
That is not how I interpreted the statement. To me it conveyed a strong dismissal of any further discussion on the subject.
Since the context was in conversation with a skeptic who could clearly have benefited from a clear and reasoned argument but was instead presented with this comment, my opinion is that this undermines the issue.
I am willing to accept that this may not have been the intention of the statement.
Welcome to LessWrong!
I think you meant the opposite of what that entails. We usually say, “X has the ball” or “The ball is in X’s court” when we mean, “It’s X’s responsibility to do something now”. Thus, I initially read that statement to mean, “AGW proponents need to provide more evidence”. Did you mean:
1 - AGW is the best hypothesis. 2 - People act as though the burden of proof is on AGW proponents.
The ball is in X’s court --> Tennis reference. They must take action to return the ball or you lose.
X has the ball --> Football reference (pick your flavour of football—I spelt flavour with a ‘u’ so I’ll take either soccer or Aussie Rules). You have the object of importance and the other guy doesn’t. They better stop you before you can press your advantage.
This is the first I’ve heard of the second.
This post disqualifies me from calling myself a rationalist in the LW sense. Allowing other people to force me to believe stuff is a river I’d rather not cross.