By “taboo” I meant this LW meme, which requires that you replace tabooed words not with alternate symbols but with working definitions. So, I’m still curious about your last paragraph: how is that statement justified? Why do you, why should you, feel comfortable saying and believing it? Should that comfort level be greater than you’d have saying “Earth-life was created by directed panspermia”?
Well, recall that the tabooed word is one which I sought to apply both to the theist “goddidit” and to the atheist “unknown-natural-processes-didit”. So what definition fits that word?
So how about this: “I make that statement because no other possibility fits into my current worldview, and this one fits reasonably well”. Or, if the taboo be removed, “I can’t prove it to your satisfaction. Hell, I can’t even prove it to my satisfaction. Yet I believe it, and I consider it a reasonable thing to believe. I guess I am simply taking it on faith.”
Why not just have an amount of belief proportional to the amount of evidence? That is, wouldn’t it be more rational to say “I think naturalistic self-organized abiogenesis is the most plausible solution known, and here’s why, but I’m not so confident in it that I think other possible solutions (including some we haven’t yet thought up) are implausible” and skip all this business about worldviews and proof? Proof isn’t really all that applicable to inductive reasoning, and I’m very skeptical of the idea that “X fits with my worldview” is a good reason for any significant amount of confidence that X is true.
Why not just have an amount of belief proportional to the amount of evidence?
Because, as a Bayesian, I realize that priors matter. Belief is produced by a combination of priors and evidence.
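To make that concrete, here is a minimal sketch of a Bayesian update (the numbers are purely illustrative, not anything from this discussion): the same piece of evidence moves people with different priors to quite different posteriors.

```python
# Toy Bayesian update, with made-up numbers, showing how priors and evidence combine.
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Suppose some piece of evidence is three times as likely if the hypothesis is true.
for prior in (0.01, 0.5, 0.9):
    print(f"prior={prior:.2f} -> posterior={posterior(prior, 0.6, 0.2):.2f}")
# prior=0.01 -> posterior=0.03
# prior=0.50 -> posterior=0.75
# prior=0.90 -> posterior=0.96
```

Same evidence, same likelihood ratio, but the prior you start with largely determines how much belief you end up with.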
Proof isn’t really all that applicable to inductive reasoning,
Sure it is. Proof is applicable in both deductive and inductive reasoning. What you probably meant to say is that proof is not the only thing applicable to inductive reasoning.
I think that you will find that most of the reasoning that takes place in a field like abiogenesis has more of a deductive flavor than an inductive one. There just is not that much evidence available to work with.
and I’m very skeptical of the idea that “X fits with my worldview” is a good reason for any significant amount of confidence that X is true.
Well, then how do you feel about the idea that “X does not fit with my worldview” is a good reason for a significant amount of skepticism that X is true?
Seems to me that just a little bit ago you were finding a nice fit between X = “Miller-Urey-didit” and your worldview. A fit so nice that you were confident enough to set out to tell a total stranger about it.
Well, then how do you feel about the idea that “X does not fit with my worldview” is a good reason for a significant amount of skepticism that X is true?
I was thinking of “worldview” as a system of axioms against which claims are tested. For example, a religious worldview might axiomatically state that God exists and created the universe, and so any claim which violates that axiom can be discarded out of hand.
I’m realizing now that that’s not a useful definition; I was using it as shorthand for “beliefs that other people hold that aren’t updatable, unlike of course my beliefs which are totally rational because mumble mumble and the third step is profit”.
Beliefs which cannot be updated aren’t useful, but not all beliefs which might reasonably form a “worldview” are un-Bayesian. Maybe a better way to talk about worldviews is to think about beliefs which are highly depended upon; beliefs which, if they were updated, would also cause huge re-updates of lots of beliefs farther down the dependency graph. That would include both religious beliefs and the general belief in rationality, and include both un-updateable axiomatic beliefs as well as beliefs that are rationally resistant to update because a large collection of evidence already supports them.
So, I withdraw what I said earlier. Meshing with a worldview can in fact be rational support for a hypothesis, provided the worldview itself consists of rationally supported beliefs.
Okay, with that in mind:
Seems to me that just a little bit ago you were finding a nice fit between X = “Miller-Urey-didit” and your worldview. A fit so nice that you were confident enough to set out to tell a total stranger about it.
My claim that Miller-Urey is support for the hypothesis of life naturally occurring on Earth was based on the following beliefs:
1. The scientific research of others is good evidence even if I don’t understand the research itself, particularly when it is highly cited
2. The Miller-Urey experiment demonstrated that amino acids could plausibly form in early Earth conditions
3. Given sufficient opportunities, these amino acids could form a self-replicating pseudo-organism, from which evolution could be bootstrapped
Based on what you’ve explained, I have significantly reduced my confidence in #3. My initial confidence in #3 was too high; it was based on hearing lots of talk about Miller-Urey amino acids being the building blocks of life, when I had not actually heard of any specific pathway from those amino acids to life that experts in the field confidently accept as plausible.
Okay, so my conclusion has been adjusted (thanks!), but to bring it back to the earlier point: what about worldviews? Of the above, I think only #1 could be said to have to do with worldviews, and I still think it’s reasonable. As with your stereo amplifier example, even though I may not know enough about a subject to understand the literature myself, I can still estimate fairly well whether people who do claim to know enough about it are being scientific or pseudo-scientific, based on testability and lack of obviously fallacious reasoning.
Mis-application of that principle led me to my mistake with #3, but I think the principle itself stands.
Beliefs which cannot be updated aren’t useful, but not all beliefs which might reasonably form a “worldview” are un-Bayesian. Maybe a better way to talk about worldviews is to think about beliefs which are highly depended upon; beliefs which, if they were updated, would also cause huge re-updates of lots of beliefs farther down the dependency graph.
Yes.
Beliefs have a hierarchy, and some are more top-level than others. Among the most top-level beliefs are:
1. a vast superintelligence exists
2. it has created/effected/influenced our history
If you give high weight to 1, then 2 follows and is strengthened, and this naturally guides your search for explanations for mysteries. A top-level belief sends down a massive cascade of priors that can affect how you interpret everything else.
If you hold the negation of 1 and/or 2 as top-level beliefs, then you look for natural explanations for everything. Arguably the negation of ‘goddidit’ as a top-level belief was a major boon to science, because it tends to align with Ockham’s razor.
But at the end of the day it’s not inherently irrational to hold these top-level beliefs. Francis Crick for instance looked at the origin of life problem and decided an unnatural explanation involving a superintelligence (alien) was actually a better fit.
A worldview comes into play when one jumps to #3 with Miller-Urey because it fits with one’s top-level priors. Our brain is built around hierarchical induction, so we always have top-level biases. This isn’t really an inherent weakness as there probably is no better (more efficient) way to do it. But it is still something to be aware of.
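To put rough numbers on that cascade (a toy sketch with made-up probabilities, not anything anyone here has actually computed): the weight you give the top-level belief flows directly into the prior you can assign to any downstream explanation that depends on it.

```python
# Toy cascade: how a top-level prior propagates to a dependent belief (made-up numbers).
# H_top: "a vast superintelligence exists"
# H_dep: "that superintelligence influenced the origin of life"

def p_dep(p_top, p_dep_given_top=0.5, p_dep_given_not_top=0.0):
    # Law of total probability: P(H_dep) = P(H_dep|H_top)P(H_top) + P(H_dep|~H_top)P(~H_top)
    return p_dep_given_top * p_top + p_dep_given_not_top * (1 - p_top)

for p_top in (0.001, 0.3, 0.9):
    print(f"P(superintelligence)={p_top} -> prior on 'it-didit' explanations={p_dep(p_top)}")
# -> roughly 0.0005, 0.15, and 0.45: the downstream prior scales with the top-level one.
```

Evidence can of course move things from there; the point is just that which explanations even look worth pursuing is largely set before any evidence arrives.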
But at the end of the day it’s not inherently irrational to hold these top-level beliefs. Francis Crick for instance...
But, I don’t think Crick was talking about a “vast superintelligence”. In his paper, he talks about extraterrestrials sending out unmanned long-range spacecraft, not anything requiring what I think he or you would call superintelligence. In fact, he predicted that we would have that technology within “a few decades”, though rocket science isn’t among his many fields of expertise, so I take that with a grain of salt.
A worldview comes into play when one jumps to #3 with Miller-Urey because it fits with one’s top-level priors.
I don’t think that’s quite what happened to me, though; the issue was that it didn’t fit my top-level priors. The solution wasn’t to adjust my worldview belief but to apply it more rationally; I ran into an akrasia problem and concluded #3 because I hadn’t examined my evidence well enough according to even my own standards.
The scientific research of others is good evidence even if I don’t understand the research itself, particularly when it is highly cited
Yeah, it sure sounds like a reasonable principle, doesn’t it? What could possibly be wrong with trusting something which gets mentioned so often? Well, as a skeptic, who by definition doesn’t accept arguments merely because they get cited a lot, what do you think could be wrong with that maxim? Is it possibly something about the motivation of the people doing the citing?
What could possibly be wrong with trusting something which gets mentioned so often?
The quality of the cites is important, not just the quantity.
It’s possible for experts to be utterly wrong, even in their own field of expertise, even when they are very confident in their claims and seem to have good reason to be. However, it seems to me that the probability of that decreases with how testable their results are, the amount and quality of expertise they have, and the degree to which other experts legitimately agree with them (i.e. not just nodding along, but substantiating the claim with their own knowledge).
Since I’m not an expert in the given field, my ability to evaluate these things is limited and not entirely trustworthy. However, since I’m familiar with the most basic ideas of science and rationality, I ought to be able to differentiate pseudo-science from science pretty well, particularly if the pseudo-science is very irrational, or if the science is very sound.
That I had a mistaken impression about the implications of Miller-Urey, wherein I confused pop-science with real science, decreases my confidence that I’ve been generally doing it right. However, I still think the principles I listed above make sense, and that my primary error was in failing to notice the assumption I was making re: smoke → fire.
Excellent summary, I think. I have just a few things to add.
… it seems to me that the probability of that decreases with how testable their results are …
A claim that the (very real) process that Miller discovered was actually involved in the (also very real, but unknown) process by which life originated is pretty much the ultimate in untestable claims in science.
… the amount and quality of expertise they have …
In my own reading in this area, I quickly noticed that when the Miller experiment is cited in an origin-of-life chapter of a book that is really about something else, it is mentioned as if it were important science. But when it is mentioned in a book about the origin of life, it is mentioned as intellectual history, almost in the way that chemistry books mention alchemy and phlogiston.
In other words, you can trust people like Orgel with expertise in this area to give you a better picture of the real state of knowledge than someone like Paul Davies, say, who may be an expert on the Big Bang, but also includes chapters on origin-of-life and origin-of-man because it helps to sell more books.