What is it that you now understand, that you didn’t before?
That is annoyingly difficult to describe. Of central importance, I think, is the notion of privileging the hypothesis, and what that really means: why what we naively consider “evidence” for a position really isn’t.
ISTM that this is the core of grasping Bayesianism: not so much understanding what reasoning is, as understanding why what we all naively think is reasoning and evidence usually isn’t.
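That hasn’t really helped… would you try again?
(What does privileging the hypothesis really mean? and why is reasoning and evidence usually … not?)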
Have you come across the post by that name? Without reading that, it may be hard to reverse-engineer the meaning from the jargon.
The intro gives a solid intuitive description:
Suppose that the police of Largeville, a town with a million inhabitants, are investigating a murder in which there are few or no clues—the victim was stabbed to death in an alley, and there are no fingerprints and no witnesses.
Then, one of the detectives says, “Well… we have no idea who did it… no particular evidence singling out any of the million people in this city… but let’s consider the hypothesis that this murder was committed by Mortimer Q. Snodgrass, who lives at 128 Ordinary Ln. It could have been him, after all.”
That is privileging the hypothesis: looking for evidence and taking an idea seriously when you have no good reason to consider it over countless others that are just as likely.
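To put rough numbers on the intuition (a minimal sketch; the figures are invented for illustration and aren’t from the post): with a million inhabitants and no clues, each resident starts at a prior of about one in a million, and a fact that would be about as likely whether or not Snodgrass did it barely moves that number.

```python
# Toy illustration (hypothetical numbers): why naming one resident out of a
# million is "privileging the hypothesis" unless the evidence discriminates.
population = 1_000_000
prior = 1 / population          # P(Snodgrass did it), with no clues at all

# "Evidence" that is almost equally likely whether or not he did it,
# e.g. "he was in town that night".
p_e_given_guilty = 0.95
p_e_given_innocent = 0.90

# Bayes' rule: P(guilty | E) = P(E | guilty) * P(guilty) / P(E)
p_e = p_e_given_guilty * prior + p_e_given_innocent * (1 - prior)
posterior = p_e_given_guilty * prior / p_e
print(posterior)   # ~1.06e-06 -- still one-in-a-million territory
```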
I have come across that post, and the story of the murder investigation, and I have an understanding of what the term means.
The obvious answer to the murder quote is that you look harder for evidence around the crime scene, and go where the evidence leads, and there only. The more realistic answer is that you look for recent similar murders, for people who had a grudge against the dead person, for criminals known to commit murder in that city… and use those to progress the investigation, because those are useful places to start.
I’m wondering what pjeby has realised, which turns this naive yet straightforward understanding into wrongthought worth commenting on.
If evidence is not facts which reveal some result-options to be more likely true and others less likely true, then what is it?
I’m wondering what pjeby has realised, which turns this naive yet straightforward understanding into wrongthought worth commenting on.
Consider a hypothesis, H1. If a piece of evidence E1 is consistent with H1, the naive interpretation is that E1 is an argument in favor of H1.
In truth, this isn’t an argument in favor of H1; it’s merely the absence of an argument against H1.
That, in a nutshell, is the difference between Bayesian reasoning and naive argumentation—also known as “confirmation bias”.
To really prove H1, you need to show that E1 wouldn’t happen under H2, H3, etc., and you need to look for disconfirmations D1, D2, etc. that would invalidate H1, to make sure they’re not there.
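A small sketch of the same point (my own illustration, with invented hypothesis names and numbers, not pjeby’s): E1 favors H1 only to the extent that it is more probable under H1 than under the competing hypotheses; if every hypothesis predicts E1 equally well, the posterior doesn’t move at all.

```python
# Sketch: updating three competing hypotheses on a piece of evidence.
# All numbers are invented for illustration.

def update(priors, likelihoods):
    """Return normalized posteriors: P(H|E) proportional to P(E|H) * P(H)."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

priors = {"H1": 1/3, "H2": 1/3, "H3": 1/3}

# E1 is "consistent with" H1 -- but it is just as probable under H2 and H3,
# so it is not evidence for H1: the posteriors stay at 1/3 each.
consistent_only = {"H1": 0.8, "H2": 0.8, "H3": 0.8}
print(update(priors, consistent_only))   # H1, H2, H3 all ~0.333

# E2 would be rare if H2 or H3 were true.  That asymmetry in the
# likelihoods is what makes it real evidence favoring H1.
discriminating = {"H1": 0.8, "H2": 0.1, "H3": 0.1}
print(update(priors, discriminating))    # H1 ~0.8, H2 ~0.1, H3 ~0.1
```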
Before I really grokked Bayesianism, the above all made logical sense to me, but it didn’t seem as important as Eliezer claimed. It seemed like just another degree of rigor, rather than reasoning of a different quality.
Now that I “get it”, the other sort of evidence seems more-obviously inadequate—not just lower-quality evidence, but non-evidence.
ISTM that this is a good way to test at least one level of how well you grasp Bayes: does simple supporting evidence still feel like evidence to you? If so, you probably haven’t “gotten” it yet.
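That is from ‘You can’t prove the null by not rejecting it’.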
The obvious answer to the murder quote is that you look harder for evidence around the crime scene, and go where the evidence leads, and there only. The more realistic answer is that you look for recent similar murders, for people who had a grudge against the dead person, for criminals known to commit murder in that city… and use those to progress the investigation, because those are useful places to start.
I’m wondering what pjeby has realised, which turns this naive yet straightforward understanding into wrongthought worth commenting on.
That isn’t a wrongthought. Factors like the ones you mention here are all good reasons to assign credence to a hypothesis.
If evidence is not facts which reveal some result-options to be more likely true and others less likely true, then what is it?
Yes, no, maybe… that is exactly what it is! An example of an error would be having some preferred opinion and then finding all the evidence that supports that particular opinion. Or, say, encountering a piece of evidence and noticing that it supports your favourite position but neglecting that it supports positions X, Y and Z just as well.