Privileging the Hypothesis
Suppose that the police of Largeville, a town with a million inhabitants, are investigating a murder in which there are few or no clues—the victim was stabbed to death in an alley, and there are no fingerprints and no witnesses.
Then, one of the detectives says, “Well… we have no idea who did it… no particular evidence singling out any of the million people in this city… but let’s consider the hypothesis that this murder was committed by Mortimer Q. Snodgrass, who lives at 128 Ordinary Ln. It could have been him, after all.”
I’ll label this the fallacy of privileging the hypothesis. (Do let me know if it already has an official name—I can’t recall seeing it described.)
Now the detective may perhaps have some form of rational evidence that is not legal evidence admissible in court—hearsay from an informant, for example. But if the detective does not have some justification already in hand for promoting Mortimer to the police’s special attention—if the name is pulled entirely out of a hat—then Mortimer’s rights are being violated.
And this is true even if the detective is not claiming that Mortimer “did” do it, but only asking the police to spend time pondering that Mortimer might have done it—unjustifiably promoting that particular hypothesis to attention. It’s human nature to look for confirmation rather than disconfirmation. Suppose that three detectives each suggest their hated enemies, as names to be considered; and Mortimer is brown-haired, Frederick is black-haired, and Helen is blonde. Then a witness is found who says that the person leaving the scene was brown-haired. “Aha!” say the police. “We previously had no evidence to distinguish among the possibilities, but now we know that Mortimer did it!”
This is related to the principle I’ve started calling “locating the hypothesis,” which is that if you have a billion boxes only one of which contains a diamond (the truth), and your detectors only provide 1 bit of evidence apiece, then it takes much more evidence to promote the truth to your particular attention—to narrow it down to ten good possibilities, each deserving of our individual attention—than it does to figure out which of those ten possibilities is true. It takes 27 bits to narrow it down to ten, and just another 4 bits will give us better than even odds of having the right answer.
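The arithmetic here can be checked back-of-the-envelope; the numbers below are the ones from the paragraph above, and the script is just a sketch of that calculation:

```python
import math

# One truth hidden among a billion hypotheses.
n = 10**9
total_bits = math.log2(n)            # ~29.9 bits needed to single out one answer

# Evidence needed just to narrow the field down to ten candidates:
bits_to_ten = math.log2(n / 10)      # ~26.6, i.e. about 27 bits

# Four more bits overshoot the ~3.3 bits remaining, leaving the
# leading hypothesis with better than even odds:
odds = 2 ** (bits_to_ten + 4) / n    # 1.6 : 1 in its favor
print(round(bits_to_ten, 1), round(odds, 2))
```

So locating the hypothesis consumes roughly 27 of the ~30 bits; testing the shortlist takes only the last few.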
Thus the detective, in calling Mortimer to the particular attention of the police, for no reason out of a million other people, is skipping over most of the evidence that needs to be supplied against Mortimer.
And the detective ought to have this evidence in their possession, at the first moment when they bring Mortimer to the police’s attention at all. It may be mere rational evidence rather than legal evidence, but if there’s no evidence then the detective is harassing and persecuting poor Mortimer.
During my recent diavlog with Scott Aaronson on quantum mechanics, I did manage to corner Scott to the extent of getting him to admit that there was no concrete evidence whatsoever that favors a collapse postulate or single-world quantum mechanics. But, said Scott, we might encounter future evidence in favor of single-world quantum mechanics, and many-worlds still has the open question of the Born probabilities.
This is indeed what I would call the fallacy of privileging the hypothesis. There must be a trillion better ways to answer the Born question without adding a collapse postulate that would be the only non-linear, non-unitary, discontinuous, non-differentiable, non-CPT-symmetric, non-local-in-the-configuration-space, Liouville’s-Theorem-violating, privileged-space-of-simultaneity-possessing, faster-than-light-influencing, acausal, informally specified law in all of physics. Something that unphysical is not worth saying out loud or even thinking about as a possibility without a rather large weight of evidence—far more than the current grand total of zero.
But because of a historical accident, collapse postulates and single-world quantum mechanics are indeed on everyone’s lips and in everyone’s mind to be thought of, and so the open question of the Born probabilities is offered up (by Scott Aaronson no less!) as evidence that many-worlds can’t yet offer a complete picture of the world. Which is taken to mean that single-world quantum mechanics is still in the running somehow.
In the minds of human beings, if you can get them to think about this particular hypothesis rather than the trillion other possibilities that are no more complicated or unlikely, you really have done a huge chunk of the work of persuasion. Anything thought about is treated as “in the running,” and if other runners seem to fall behind in the race a little, it’s assumed that this runner is edging forward or even entering the lead.
And yes, this is just the same fallacy committed, on a much more blatant scale, by the theist who points out that modern science does not offer an absolutely complete explanation of the entire universe, and takes this as evidence for the existence of Jehovah. Rather than Allah, the Flying Spaghetti Monster, or a trillion other gods no less complicated—never mind the space of naturalistic explanations!
To talk about “intelligent design” whenever you point to a purported flaw or open problem in evolutionary theory is, again, privileging the hypothesis—you must have evidence already in hand that points to intelligent design specifically in order to justify raising that particular idea to our attention, rather than a thousand others.
So that’s the sane rule. And the corresponding anti-epistemology is to talk endlessly of “possibility” and how you “can’t disprove” an idea, to hope that future evidence may confirm it without presenting past evidence already in hand, to dwell and dwell on possibilities without evaluating possibly unfavorable evidence, to draw glowing word-pictures of confirming observations that could happen but haven’t happened yet, or to try and show that piece after piece of negative evidence is “not conclusive.”
Just as Occam’s Razor says that more complicated propositions require more evidence to believe, more complicated propositions also ought to require more work to raise to attention. Just as the principle of burdensome details requires that each part of a belief be separately justified, it requires that each part be separately raised to attention.
As discussed in Perpetual Motion Beliefs, faith and type 2 perpetual motion machines (water → ice cubes + electricity) have in common that they purport to manufacture improbability from nowhere, whether the improbability of water forming ice cubes or the improbability of arriving at correct beliefs without observation. Sometimes most of the anti-work involved in manufacturing this improbability is getting us to pay attention to an unwarranted belief—thinking on it, dwelling on it. In large answer spaces, attention without evidence is more than halfway to belief without evidence.
Someone who spends all day thinking about whether the Trinity does or does not exist, rather than Allah or Thor or the Flying Spaghetti Monster, is more than halfway to Christianity. If leaving, they’re less than half departed; if arriving, they’re more than halfway there.
An oft-encountered mode of privilege is to try to make uncertainty within a space slop outside of that space, onto the privileged hypothesis. For example, a creationist seizes on some (allegedly) debated aspect of contemporary theory, argues that scientists are uncertain about evolution, and then says, “We don’t really know which theory is right, so maybe intelligent design is right.” But the uncertainty is uncertainty within the realm of naturalistic theories of evolution—we have no reason to believe that we’ll need to leave that realm to deal with our uncertainty, still less that we would jump out of the realm of standard science and land on Jehovah in particular. That is privileging the hypothesis—taking doubt within a normal space, and trying to slop doubt out of the normal space, onto a privileged (and usually discredited) extremely abnormal target.
Similarly, our uncertainty about where the Born statistics come from should be uncertainty within the space of quantum theories that are continuous, linear, unitary, slower-than-light, local, causal, naturalistic, et cetera—the usual character of physical law. Some of that uncertainty might slop outside the standard space onto theories that violate one of these standard characteristics. It’s indeed possible that we might have to think outside the box. But single-world theories violate all these characteristics, and there is no reason to privilege that hypothesis.
-- Assassins shows how privileging hypotheses is done.
“Is it cold out in space, Bowie?”
“You can borrow my jumper if you like, Bowie!”
“Does the cold of deep space make your nipples get pointy, Bowie?”
“Do you use your pointy nipples as telescopic antennae to transmit data back to earth?”
“I bet you do, you freaky old bastard you!”
[...]
“Receiving transmission...from David Bowie’s nipple antennae!”
--Flight of the Conchords
That’s pretty far out!
Just to confirm:
Another way of explaining the ‘locating the hypothesis’ concept would be to say: “When answering a question with a large number of possible answers, it takes more work to narrow down the possibilities (generate the reasonable hypotheses) than it does to test those hypotheses for correctness.”
Is that right?
That is correct, and even more importantly: “When answering a question with a large enough number of possible answers, any single possible answer will have a bigger chance of being a false positive than a true positive if tested.”
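A toy illustration of that base-rate point, with made-up numbers (one guilty party in a city of a million, and a single piece of evidence that is 99% accurate either way):

```python
# Hypothetical numbers: one guilty suspect among a million people,
# and a test (say, a hair-color match) that is right 99% of the time.
population = 10**6
p_match_if_guilty = 0.99
p_match_if_innocent = 0.01

true_positives = 1 * p_match_if_guilty
false_positives = (population - 1) * p_match_if_innocent

# Probability that a randomly privileged suspect who "matches" is guilty:
p_guilty = true_positives / (true_positives + false_positives)
print(p_guilty)  # ~0.0001: the match is almost certainly a false positive
```

Without prior evidence to narrow the suspect pool, even a strong-sounding test result is overwhelmingly likely to be noise.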
I agree that privileging a hypothesis is a common error. I don’t agree that it applies in the example used, though.
If you have a tradition thousands of years old saying that a particular spot was the site of Nazareth in 4BC, or of Troy in 1200BC, it isn’t irrational to privilege the hypothesis that that spot was indeed the site of Nazareth, or of Troy.
Similarly, when the entire world has used the single-world hypothesis almost exclusively until the recent past, it isn’t unfairly privileging it to still consider it a major contender.
You might think this is more like evolution vs. creationism. I don’t mean that we should keep teaching creationism in school as an alternative today. But we haven’t got as strong an argument for many-worlds as we do for evolution.
Also, AFAIK there’s just these 2 competing hypotheses: One-world, many-world. We don’t have the 7-worlds hypothesis and the 23-worlds hypothesis and the pi-worlds hypothesis. We could have the countable-worlds hypothesis and the uncountable-worlds hypothesis, but AFAIK we don’t even have those. How can you say it’s irrational to consider 1 of the only 2 hypotheses available?
Reminiscent of the guy who was asked what were the odds he would win the lottery, and replied, “Fifty-fifty, either I win or I don’t.” The corresponding heuristic-and-bias is I think known as “packing and unpacking” or something along those lines.
I remember the Daily Show had a funny example of this in action. They were interviewing people about the possibility of the Large Hadron Collider destroying the earth, and they talked to a physicist and a crazy survivalist. The former said it was impossible for the LHC to destroy the earth, while the latter used basically that argument: “There are two possibilities: it can destroy us, or not. So, that’s about a 50⁄50 chance.”
Then later the interviewer followed the survivalist to his bunker and asked him: if everyone died but them, don’t they have an obligation to mate to repopulate the earth? (They were both men.) The survivalist said, “Um, no, because that doesn’t work. It’s impossible.” And then the interviewer came back with, “well, there’s two possibilities: we’ll produce a baby, or we won’t, so that’s 50⁄50 -- pretty good odds.”
I’m sure someone would love to dig up the clip...
Sure! Didn’t take more than three years for someone to do that, either!
Though apparently your mind edited out how the interviewee’s “there’s a 0% chance it [them reproducing] will work” makes a great parallel with how John Ellis, who’s otherwise amazing in this video, earlier explains that “there is 0% chance”, “zero”, of the LHC destroying the world[*]. Sigh.
(The clip is great from start to finish, but IMHO the funniest part is what John Oliver says in answer to “This place is perfectly safe” towards the end of the video. I was going to say that the only people it could be said to make fun of are annoying nitpickers, but on reflection, it actually feels like a really great dig at people who make terrible arguments and want you to take them seriously, even though they really should realize the flaw themselves.)
[*] Technically, the video only suggests that it is the world-destroying scenario that Ellis claims has “0% chance”, and this is the Daily Show, but I think we can safely assume that this is not selective editing to make him look like a bad Bayesian.
1 vs. many is a very natural divide, not at all a good example of the packing and unpacking fallacy.
Once you accept that there exists something isomorphic to a wave function, it’s more like:
many worlds vs. many worlds and an orang-utan vs. many worlds and an apple tree vs. many worlds and a television vs. many worlds and a blue castle vs. (...) vs. many worlds and a character-of-natural-law-violating process that constantly kills all the worlds except one.
All cases except the last case contain many worlds, but Phil packed them together. I think that’s the intuition Eliezer was getting at.
We shouldn’t be afraid here to sound Orwellian. Copenhagen people believe in the many worldeaters interpretation. We believe in the no worldeaters interpretation.
Whatever is being done to the words “many worldeater interpretation” and “worldeaters interpretation” does not show up on my screen.
So true—My “8 worlds and an orang-utan” hypothesis never got the respect it deserved.
--Stan Kelly-Bootle
Proper consideration.
Props for the perseverance, man. Props ;-)
That is exactly and perfectly right and I should use this example henceforth.
I think you are demonstrating a dramatic failure to update by saying that a hypothesis held by 99.99+% of humanity, and even by most people who have thought about the issues, is not worth considering.
I’d like to know what the distribution of opinions of quantum physicists and cosmologists is.
There have been polls, with a dramatic range of support. Wikipedia leads me to the most MWI-friendly poll. I think the low-water mark is about 10% of some other group of quantum theorists. I suspect that the variation is due to wording issues and local social pressure (by “local” I mean the conference), but the page suggests different communities:
Antia Lamas saw MWI win a poll for least favorite interpretation. On that page, Michael Nielsen mentions a poll where MWI came 3rd, after Copenhagen and decoherence...but if decoherence is an interpretation, it sure sounds like MWI to me.
Why you insist on being dogmatic on this is beyond me. In your writings on the subject, you admit you don’t understand the math behind quantum mechanics, which is in fact the model. Why be so sure you are right about the interpretation of the model you don’t understand?
People look kindly on those who are humble when commenting on things outside of their expertise. People that go around making bold claims about things about which they are not that knowledgeable are labeled cranks, and rightfully so.
It’s not a major contender because of hearsay of powerful evidence, as with legends. It’s a major contender because it’s been unfairly privileged ever since someone thought of it. It’s far more complicated than the hypotheses that they haven’t thought of, so by Occam’s razor, the truth is far more likely to be a hypothesis that nobody’s thought of than that one.
It’s not like a legend about the city of Nazareth. It’s not even like a legend about the birth of a god. It’s like concluding that there’s a god because life has clearly been optimized, and you haven’t thought of any alternative hypotheses yet. Once Many-Worlds has been suggested, it’s like concluding there’s a good chance of there being a god, because you would have thought there was one before you thought of the alternative hypothesis.
Just because you haven’t thought of an alternative hypothesis doesn’t mean there isn’t one. It does mean that you have to discount it on the, rather high, chance that it has already been disproven. Most have. But if there’s enough alternatives, if your hypothesis is complicated enough from the beginning, there’s bound to be an alternative hypothesis that actually explains it.
I agree. It ought to have taken the first people to grasp the math at least 30 seconds to discard the Nazareth concept.
I don’t know, this “Mortimer Q. Snodgrass” fellow seems pretty suspicious to me. I mean, a weird name like that is probably an alias. And “Ordinary Lane”? At a power of 2 no less? Tell me he plays tennis and I’ll be convinced he did it.
And even if he wasn’t the murderer, he’s probably guilty of something. Check his computer for pirated music! ;)
Mortimer Snodgrass? Or maybe it was Homer Dalrymple. :) Either way, it sounds like a mad scientist’s name.
(Six year late reply, I know, but I just heard the relevant chapter of HPMOR and was curious to see if anyone else caught the reference!)
Well, I think that John Q. Wiffleheim of 1234 Norkle Rd is a more likely candidate.
I don’t know if there’s another name for privileging the hypothesis, but it seems closely related to anchoring, in that it involves establishing an unjustifiable “starting point” from which to search.
It seems like it’s a special case of anchoring to me.
Also related to availability—the very fact that it enters your conscious mind, even if you immediately discount it, is going to wear down that thought groove, making it more likely to be consciously (or subconsciously) accessed.
You are asserting a false duality: either many-worlds, or a collapse postulate. You use evidence AGAINST a collapse as evidence FOR many worlds, which is very weak evidence. Here is a third alternative: the wavefunction is not real, merely a mathematical formalism used to calculate probability distributions (this map doesn’t have to be the territory). Here is a fourth: collapse is an approximation to a small, non-linear self-coupling in the equation that governs time evolution. Here is a fifth: evolution is governed by both the advanced and retarded Green function solutions to the Schroedinger equation, and what appears to be collapse is a sort of beat-resonance between the two. Here is a sixth: there are (non-local) degrees of freedom apart from the wavefunction, and ‘collapse’ occurs because our existing theory is confused about what devices actually measure. I could keep going.
Every one of the above has a huge advantage over many worlds: there is positive evidence to update in their favor. Because they accurately reproduce most of quantum mechanics, all the evidence that pushes us to “quantum mechanics is probably right” CAN lead us to any of the above theories.
Many worlds does NOT have Born probabilities, and so IT DOES NOT MAKE PREDICTIONS. No one knows how to use many worlds to do anything at all. So you are doing a very weird sort of Bayesian process: you use Copenhagen’s or one of the above theories’ predictions to update your belief to “quantum mechanics is probably right.” Then, starting from this new belief, you use other facts to update to “many worlds is probably right.” Unfortunately, you didn’t notice that in switching to many worlds, all of the evidence that pointed to quantum mechanics is gone.
If you start from an agnostic prior, many worlds has no predictions to push you in the direction of “this is the right theory.”
That actually isn’t nonsense, even if (or rather, even though) there are not only two hypotheses. Given that collapse outright excludes many worlds, evidence against collapse is evidence in favor of many worlds. It is evidence that merely becomes weaker the more additional probability mass there is for the additional hypotheses.
I retract the overly strong word “nonsense”; I’m not sure how to mark up a strikeout, so I merely edited the above post.
EHeller: what if the decision-theoretic approach by Wallace et al. turns out to work? Would you consider MWI “heavily” favoured then?
I think that “privileging the hypothesis” is an example of special pleading (http://en.wikipedia.org/wiki/Special_pleading) being applied to the selection of a hypothesis, as opposed to the evaluation of the hypothesis.
Something to point out to someone who says this is that ‘possibility’ is not a constraint—EVERYTHING is possible. As far as I know (and correct me if I’m wrong, because that would be a major fuckup), you can’t assign a probability of zero to anything. You can’t separate the possible things from the impossible ones, and then focus only on the possible. ‘Possible’, by itself, applies to everything, so you don’t say anything by declaring something ‘possible’. It’s only when you start talking about degrees of possibility that the word has any meaning.
Definitional contradictions are impossible. For example, I can say that I will encounter a married bachelor, or a non-female vixen, with P=0. This doesn’t actually say anything about the world; I could figure out that there are no married bachelors without leaving my room, simply by knowing that bachelor is defined as “man who is not married.” Mathematical truths (like 2+2=4) and non-contradiction (as mentioned deeper in this thread with the buttering of pancakes) are specific instances of definitional contradiction.
You’re generally right, though. Truths that actually involve looking at the world, i.e. ones that are not inherently about language, cannot have P=0 or P=1.
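Part of the reason empirical claims shouldn’t get P=0 or P=1 is that, under Bayes’ theorem, those values are absorbing states: no finite amount of evidence can ever move them. A minimal sketch of this (the likelihood ratios are invented for illustration):

```python
# Toy illustration: Bayesian updating via odds ratios. A prior of exactly
# 0 or 1 is absorbing -- no finite evidence budges it -- which is one reason
# empirical claims shouldn't be assigned those probabilities.

def update(prior, likelihood_ratio):
    """Posterior from a prior and a likelihood ratio P(E|H)/P(E|~H)."""
    if prior in (0.0, 1.0):
        return prior  # odds of 0 or infinity: evidence cannot move them
    odds = prior / (1 - prior)
    odds *= likelihood_ratio
    return odds / (1 + odds)

print(update(0.5, 10))       # strong evidence moves a middling prior a lot
print(update(0.0, 10**9))    # still 0.0, no matter how strong the evidence
print(update(1.0, 10**-9))   # still 1.0, likewise
```

The two extreme cases show why “P=0 for empirical claims” is a trap: once assigned, nothing observable can ever correct it.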
Actually, certain truths about the self and “subjective” experience may also hit P=0 and P=1. It seems I can be certain of my existence, even if I can’t be certain of what I am or what causes my existence. I can also be certain that there exists some X such that X exists. I also think I can be certain that it looks like there’s a computer screen in front of me, that my knee feels slightly uncomfortable, and that I am literate. All of this seems to have P=1; the implied causes may be false—I may not actually have knees, for example—but I certainly do seem to be having the sense-experience. I don’t think this contradicts your point in any practical sense, though.
I don’t know if what I’m about to say is a nitpick, but I think it’s relevant to the issue, so I’ll say it anyway:
But words have histories behind them, and there is a reason why the term “bachelor” exists. The term “bachelor” carries connotations that go beyond simply “union(male,~married)”. To borrow from an example from Hubert Dreyfus (and do forgive me for reading him), if I told you I was having a party and I wanted you to bring bachelors, would you consider bringing priests or gay men?
What’s actually happening is that we believe “bachelor” has one meaning, while expecting people to imagine a different clump of conceptspace (“connotation”) when we actually use it.
Only when you confine the issue into being a purely logical one, with “bachelor”, “man”, etc. as suggestively-named LISP tokens, can you identify purely logical (P = 0 or 1) truths. But at that point, you’ve destroyed the mutual information between those words and the outside world, including the usage of those terms in the outside world. And in that case, your statement is no longer about bachelors, but rather, about abstract logical relationships in Platonic space.
Another failure mode of arguing from a definition is that you could be wrong about the definition.
Exceptions to “everything is possible” include logical contradictions (such as mathematical falsehoods).
I voted you up, but I’m genuinely confused here—does the concept of probability/possibility apply to a strict, axiomatic, isolated (yet human created and thus fallible) system like mathematics?
Not all logical contradictions have to do with mathematics. For instance, it’s impossible (barring childish equivocation etc.) that a given pancake be both buttered and not buttered. Pancakes tend to have a very poor grasp of math.
Your confusion may have to do with epistemic v. metaphysical possibility. I can imagine that I am so deeply, profoundly confused about the universe that I could be mistaken about arithmetic; therefore, it’s sort of epistemically possible that two and two be five. However, as it happens, it’s not actually possible that two and two be five. Because I am part of a philosophy department in which David Lewis is practically worshiped, I’ll put it this way: I can think about two and two making five, but there’s no possible world in which that thought is reality.
It does have to do with mathematics, though. A buttered pancake is a pattern, an unbuttered pancake is a different pattern. Each pattern can be expressed as a series of bits, and the first series will not equal the second.
Are you 100% certain about that?
No. Do I need to explain epistemic possibility versus metaphysical possibility again?
I find your distinction between “epistemic” and “metaphysical” possibility to be fairly useless. Are you using a different brain when you consider “epistemic” possibility than when you consider “metaphysical” possibility? Does your concept of “metaphysical” possibility somehow not reside within your own mind, but rather “out there” somewhere in the realm of Platonic logic? That seems suspicious. And even if it were somehow “out there”, how do you know you’re not mistaken about what it is?
More to the point: can you name a single “metaphysical” possibility that does not reduce to an “epistemic” possibility when considered from the context of your own brain?
It is a metaphysical possibility that the universe is actually entirely random and only seems to us as if it were lawful. But it is not epistemically possible for me to ever be 100% certain that the universe is random, or that it is lawful.
So it appears to me that an epistemic possibility is something which exists only in my map, whereas a metaphysical possibility is something which exists in the territory and may be represented in my map. The fact that I represent both concepts within my own brain doesn’t seem to change this.
This would seem to make it useless to talk of metaphysical possibilities, seeing as there is no way to directly access the territory.
I don’t think so. I cannot directly access reality, but it still seems very useful to me to speak of it. Even if only to have something for my beliefs to try to correspond to.
In that case, you can refer directly to your map. Instead of saying, “The sky is blue,” you can say, “My map of the territory contains a blue sky.” (Naturally, this is only necessary when context requires; if you’re in an ordinary conversation, there’s no need to go that far.) To me, it seems that the only time you need to really refer to the territory is when you’re talking directly about the map-territory relationship, e.g. “As research continues, our understanding of quantum physics will hopefully increase.” But to speak descriptively of the territory is to commit the Mind Projection Fallacy. After all, there’s no difference between saying, “I believe X,” versus just “X”; the two statements convey exactly the same information, and this information pertains only to the speaker’s map, not the territory. In my view, then, all possibilities are of the “epistemic” sort. To add a second type, “metaphysical”, seems wholly unnecessary.
I am in complete agreement with what you said that I need only talk about reality to have something to check my map against.
But I believe we may be using the term “epistemic possibility” differently.
It appears to me that when you say that X is epistemically possible, you mean that it is possible for your map to contain X (and for your map to be correct in this respect), i.e. that one can have a true belief that X.
What I mean when I say that X is epistemically possible is that I have a justified true belief that X. In that sense epistemic possibility is stronger than metaphysical possibility for me in that I require the justification of the belief as well as its truth.
That’s a very interesting question. Philosophers have been arguing about the concept of possibility (and its dual, necessity) for some time.
There’s a sense of “necessary randomness”, that Chaitin has written very extensively about.
http://www.umcs.maine.edu/~chaitin/
Corresponding to this notion, there are stochastic models of (generally-agreed) necessary truths. The best known is probably the “probability of n being prime”:
http://primes.utm.edu/glossary/xpage/PrimeNumberThm.html
But there are plenty others—e.g. the 3n+1 problem
http://en.wikipedia.org/wiki/Collatz_conjecture
or the question of who wins (first or second player) an integer-parametrized family of combinatorial games.
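The “probability of n being prime” heuristic mentioned above says a random integer near n is prime with probability roughly 1/ln(n), so the count of primes up to n should come out near n/ln(n). A quick sketch comparing that estimate to an exact count (the cutoffs are arbitrary):

```python
import math

def prime_count(n):
    """Count primes <= n with a simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False
    return sum(sieve)

# The heuristic "a random integer near n is prime with probability ~1/ln(n)"
# predicts pi(n) ~ n / ln(n); the Prime Number Theorem makes this rigorous.
for n in (1_000, 100_000):
    print(n, prime_count(n), round(n / math.log(n)))
```

The estimate undershoots slightly at these scales, but the ratio tends to 1 as n grows, which is exactly the stochastic model of a necessary truth being described.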
More exotically, Neal Stephenson’s “Anathem” and Greg Egan’s short stories “Luminous” and “Dark Integers” explore the possibility that what we think of as “necessary truths” are in fact contingent truths, frozen at some point in the distant past, and exerting a pervasive influence. (Note: I think this might sound ridiculous to a logician, but moderately reasonable to a cosmologist.) It is quite difficult to tell the difference between a necessary truth and a contingent truth which has always been true.
More prosaically, we do make errors and (given things like cosmic rays and other low-level stochastic processes) it seems unlikely that any physical process could be absolutely free of errors. We might believe something to be impossible, but erroneously. Your answer to the question “Are there any necessary truths?” probably depends on your degree of Platonism.
Nice stories, but the author didn’t find the optimal solution at the end. The red arithmetic should have kept a second small island with smooth borders, inside which a small blue patch with rough borders were maintained. This would have allowed communications without any risk of war.
Part of the problem is that “possibility” has different meanings depending on context. In everyday speech, it seems to be used to indicate degrees of probability. When people declare a certain event “possible” in everyday speech, they usually mean that it has a low but nontrivial probability, given the everyday state of the world. In this sense, I might say that it’s “impossible” for me to become an NFL player, even though in a philosophical discussion we would recognize that the probability that I could become an NFL player is greater than 0.
Problems occur when people equivocate between different meanings of “possibility,” or introduce a certain meaning into a type of discussion where it doesn’t belong. For instance, it’s “possible” that the Flying Spaghetti Monster exists and created the world, but this is not the kind of “possibility” that people deal with in everyday life.
I tend to think that the Bible and the Koran are sufficient evidence to draw our attention to the Jehovah and Allah hypotheses, respectively. Each is a substantial work of literature, claiming to have been inspired by direct communication from a higher power, and each has millions of adherents claiming that its teachings have made them better people. That isn’t absolute proof, of course, but it sounds to me like enough to privilege the hypotheses.
This is in fact the general problem here. If there is a large group of people claiming that some religion is true, that is quite enough evidence to call your attention to the hypothesis. That is in fact why people’s attention is called to the hypothesis: paying attention to what large groups of people say is not remotely close to inventing a random idea.
Eliezer, speaking of “privileging the hypothesis,” what do you think about the proscription in statistics against “data dredging,” or using past data to support post hoc hypotheses suggested by the data? What do you think about the view of descriptive science being inferior to hypothesis-driven science?
Based on your analysis, it would indeed seem that a hypothesis that could be located prior to an experiment might be more probable than a hypothesis that could only be located after an experiment.
Yet is there an over-emphasis placed on the state of mind of the particular experimenter prior to data collection? What if another scientist on the other side of the world had a hypothesis, which our original experimenter only came to after doing a study looking at something else. Can the second scientist say that the study confirms his hypothesis (because he held it in advance), while the first scientist cannot (because he only came to his hypothesis after doing the study)?
What if the post-hoc-hypothesized effect is very strong and related to plausible mechanisms in the field? What if it showed up in lots of previous studies that were looking at other things?
EDIT: Non-Eliezer people are invited to reply to this comment also.
There’s nothing inherently wrong with data dredging. Considering all possible hypotheses and keeping the ones suggested by the data is just Solomonoff induction. It only becomes problematic if you don’t have a consistent prior, e.g. if you keep the hypothesis with the greatest likelihood ratio rather than the greatest posterior.
Hypothesis-driven has its place in the human practice of science, because humans have a hard time computing a prior after having seen the data. But that’s a problem with the humans, not with the math.
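The difference between keeping the hypothesis with the greatest likelihood ratio and keeping the one with the greatest posterior can be made concrete with a toy example (all the priors, biases, and counts here are invented for illustration): dredging by raw likelihood picks an overfit hypothesis that a simplicity-weighted prior would reject.

```python
# Toy illustration: three hypotheses about a coin's bias, scored on 8 heads
# out of 10 flips. "Dredging" keeps the best raw likelihood; a consistent
# Bayesian keeps the best posterior (prior times likelihood).
from math import comb

def likelihood(p, heads=8, flips=10):
    """Binomial probability of the observed data given bias p."""
    return comb(flips, heads) * p**heads * (1 - p)**(flips - heads)

hypotheses = {
    "fair coin (p=0.5)":   (0.90, 0.5),   # (prior, bias); prior favors simplicity
    "biased coin (p=0.8)": (0.09, 0.8),
    "trick coin (p=1.0)":  (0.01, 1.0),   # refuted outright: two tails observed
}
scores = {name: (likelihood(p), prior * likelihood(p))
          for name, (prior, p) in hypotheses.items()}
best_likelihood = max(scores, key=lambda k: scores[k][0])
best_posterior  = max(scores, key=lambda k: scores[k][1])
print(best_likelihood)  # the p=0.8 hypothesis wins on raw likelihood
print(best_posterior)   # but the fair coin wins on the posterior
```

The data-fit criterion alone rewards whichever hypothesis was tuned to the data; the prior is what keeps post-hoc tuning honest, which is the mathematical content of the objection to data dredging.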
...or if you believe everything that has p<.05.
If that were true, you would never need to hold out a validation set.
I think this is a great follow up to: http://lesswrong.com/lw/o3/superexponential_conceptspace_and_simple_words/ (scroll to the very end of that post)
Good point. Also, Yvain follows up that point with his parable that shows the hypothetical example of teaching reformed Nazis that “Untermenschen are people too”, and how that’s nearly as distorting as Nazi ideology itself.
Wait, did I just Godwin?
ideological organisations like think tanks, political parties and religions do this
“But because of a historical accident, collapse postulates and single-world quantum mechanics are indeed on everyone’s lips and in everyone’s mind to be thought of”
I think there’s more to it than historical accident. After all, it was a historical accident, of sorts, that people believed one could sail directly west from Europe to arrive in Asia, but once a continent was found in between it was no trouble at all to overturn that belief. Historical accident is not the only reason, or necessarily the major reason, that we are still struggling with single-world QM today. I think if we had started with Everett, and someone had later come up with Copenhagen, the latter would gain a nontrivial number of adherents. Note: this should not be taken as justifying Copenhagen in any way. I am merely saying that this particular analysis of the egregious error that is Copenhagen is not complete.
I have perceived exactly one world all my life. Isn’t that evidence that exactly one world exists?
But that’s exactly what you’d perceive if many worlds was true.
So, Many Worlds is a garage dragon?
If many worlds weren’t favored by the evidence that distinguishes between the two explanations, and were the more complex explanation from reality’s point of view, then yes; but I don’t think it qualifies for dragonhood under these criteria.
Surely spontaneous collapse is the garage dragon here. Zero evidence, highly unlikely.
See my top level comment.
Thanks for the link ;).
OK, on the one hand we have many-worlds. As you say, no direct subjective corroborating evidence (it’s what we’d see either way). What’s more, it’s the simplest explanation of what we see around us.
On the other hand, we have one-world. Again, ‘it’s what we’d see either way’. However, we now have to postulate an extra mechanism that causes the ‘collapse’.
I know which of these feels more like a privileged complex hypothesis pulled out of thin air, like a dragon.
Could whomever downvoted me above let me know where I’m going wrong here?
How is postulating entire worlds simpler than collapse?
Decoherence is Simple
Thank you.
Because that’s not what’s actually being postulated. What’s being postulated is: “You know the basic math of QM? Well, just take that math really seriously and avoid adding too many extra rules. A CONSEQUENCE of that is many worlds.”
I.e., “take the quantum amplitudes over configuration space and the linear update rule. Also keep the whole Born statistics thing for now; hopefully we’ll be able to derive it from the rest. And that’s it. Don’t add any rules about the rest of the amplitude field going to zero or any other such nonsense. Just have all QM all the time.”
Voted this down, then changed my mind and undid it. This is a genuine question, the answer to which was graciously accepted. Downvoting people who need guidance to understand a concept and are ready to learn is exactly what we don’t want to do.
Worlds aren’t postulated, in the same way that cows aren’t.
It may be that privileging the hypothesis—or, more specifically, unjustifiably promoting a hypothesis about the goodness of a particular product or service to people’s attention—is the business end of TV advertising.
In general I agree, and of course Copenhagen is nonsense, but I think you privilege the hypothesis of Many-Worlds over Bohm. You see, Bohm has an explanation for the Born probabilities—they are a stable equilibrium state called, appropriately, “quantum equilibrium”. So there are not even any open questions.
And yes, Bohm is non-local, which you could say is a problem… or you could say it explains why quantum mechanics is different from classical mechanics. (Obviously no quantum theory is going to satisfy all our classical intuitions, or it wouldn’t be a quantum theory at all!)
It doesn’t fit with general relativity, you say? Yes, because none of them do. Quantum gravity doesn’t… work, not with our current formalisms. This is the central problem of modern physics (that and dark energy, which most physicists think is related somehow).
I’m not saying the Bohm interpretation is wrong (because I’m too inexperienced in the field to say), but I do not see how the above statement can be used to privilege Bohm over any other theory. If anything, shouldn’t its non-locality lower our priors on its correctness?

Interpretations can’t be wrong, otherwise they would not be interpretations. They also can’t be right, for the same reason. Here I define “wrong” in the natural-sciences sense, “failed an experimental test”. And that’s the only definition that matters when talking about QM (which is a natural science), as opposed to morality and stuff.
Redefining an existing word to make an unrelated point is widely considered not clever.
If interpretations cannot pass or fail an experimental test, what purpose do they serve?
(Not a rhetorical question; genuinely curious.)
They give you an excuse to not bite the bullet and accept the math as the way the universe actually is.
You might value something you can’t always see.
That didn’t really answer the question. Can you give a context-specific answer?
I believe the traditional example is a spacecraft passing over the cosmological horizon. We cannot observe this spacecraft, so the belief “things passing over the cosmological horizon cease to exist” cannot be experimentally proved or disproved. And yet, if there are large numbers of people on such a craft, their continued survival might matter a great deal to us. If we believe they will die, we will choose not to send them—which might impose heavy costs due to, e.g., overpopulation.
The analogy to many-worlds seems obvious—if true, it would mean the existence of people we cannot experimentally verify. This could have implications for, say, the value of creating new minds, because they’ll already exist somewhere else.
The analogy is hand-waving. If the spacecraft has gone over the cosmological horizon, how did you ever conclude that it exists in the first place? Such a conclusion would only be possible if you observed the spacecraft before it crossed over. In other words, it passed an experimental test.
You have a spaceship. You believe that it will cease to exist if it passes the cosmological horizon. What empirical test are you failing?
I suppose I would not be failing an empirical test, but I would be going against the well established law of conservation of mass and energy, and we can conclude I am wrong with >99% certainty.
To keep us from getting too hooked on the analogy, and to get back to my original question: if there is a theory (Bohm) that cannot pass or fail an experimental test but does go against a well-established principle (locality), why should we give it a second glance? (Again, not a rhetorical question.)
Precisely my point. The Law Of Conservation Of Energy is only well-established—empirically speaking—to hold within the observable universe. The Law Of Conservation Of Energy That I Can See is, of course, more complex, and there’s no reason to privilege that hypothesis—as long as you have some way of assigning probabilities to things you can’t observe.
Well, the Official LW Position (as endorsed by Eliezer Yudkowsky) is that you shouldn’t. And, honestly, that makes a lot of sense. Some people, however, are determined to argue that the whole question is somehow meaningless or impossible to answer.
However, the conclusion that they don’t subjectively cease to exist after we can no longer communicate with them follows unambiguously from the well-tested models of physics and cosmology. It does not require any strong extra assumptions, only some very weak ones, like that we are not in a cosmic-scale Truman show, or that the Copernican Principle holds.
By comparison, many-worlds is a strong extra assumption which has never been tested and is currently not testable (no, despite the popular misconception here, it does not follow from “just” the Schrodinger equation).
The “Truman Show Hypothesis” may violate the Copernican Principle, but it cannot be experimentally disproved.
I am not using this to argue for Many-Worlds; merely that we should care if Many-Worlds is true.
EDIT: A similar analogy would be that the ship turns into pure utilitronium, rather than vanishing. This might be a better analogy for the MWI for you.
Two questions. First, is that true as a matter of how you define ‘interpretation’, or is it true as a matter of subsequent fact? Second, do you mean to say that interpretations haven’t yet passed or failed an experimental test?; or do you mean to say that interpretations can never pass or fail an experimental test?
The ‘haven’t yet’ criterion is weakly true of ‘interpretations’ in the relevant QM context, though all the interpretations of interest here have been verified relative to empirically false models; they just haven’t been verified relative to one another. But the ‘can never’ criterion is clearly false of some of the ‘interpretations’ we’re talking about, and only contingently and ambiguously true of any of them. Whether these models will ever be empirically testable is itself an empirical question.
What are you building into ‘right’/‘wrong’ here? That is, why does your assertion have more semantic content than if you’d skipped the ‘right’/‘wrong’ assertion and just said ‘The Bohm interpretation hasn’t passed an experimental test yet. Bye now!’? Certainly if you mean to suggest that scientific models cannot have merits or dismerits aside from experimental verification/falsification, then this is wrong. Scientific models can be overly vague, or ambiguous, or internally inconsistent, or overly complex or inelegant, or unexplanatory, or gerrymandered, or ad-hoc, or unverifiable, or unfalsifiable, or historically (as opposed to experimentally) false. All of those are faults in their own right—and, often, they are Bayesianly relevant faults, faults that should impact our credence in the model.
Interpretations are designed to give the same predictions as can be inferred from the interpretation-free math; otherwise they would be called theories.
Experimentally testable new predictions. No more, no less.
This is empirically false, as a statement about how scientific discourse works. Compare string theory, which is frequently labeled a ‘theory’ (or family of theories) even though it has far more difficult-to-observe posits (one-dimensional strings!) than most (perhaps all) of the mainstream QM ‘interpretations’. See also falsified QM interpretations.
Perhaps a more model-theoretic approach would be appropriate here; clearly QM interpretations can vary quite a bit in their verifiability/falsifiability, so what distinguishes them from other theories may be that they specify the meanings of the terms in the QM formalism. On this view, ‘interpretations’ may add real content and predictions to a set of statements, provided that in the process they also fix the semantics of a large portion of the statements. After all, the problem with QM is not merely that we aren’t clear on the invisible metaphysics secretly underwriting and accounting for our experiences; we aren’t even clear on the phenomenology (appearances) or ontology (observable posits) of the theory, treated as mere formalism.
This isn’t necessarily true. Consider that the GRW interpretation has been pretty much falsified by Van Harlingen’s work at UIUC (macroscopic current superposition in SQUIDs). Most of the interpretations rely on different postulates than traditional Copenhagen quantum so there can be (and generally are) differences. However, to date, most of those differences aren’t measurable.
Similarly, we call many-worlds an “interpretation” even though no one has figured out how to actually make predictions with it. The difference between “interpretation” and “theory” is a bit loose.
I am not familiar with the GRW theory, but, like most other objective collapse models and unlike [the lowest common denominator] Copenhagen, it appears to be more than an interpretation, so no wonder that it can be falsified.
Anyway, my definition of an interpretation is “same math, same predictions, different invisible underlying ontology”. If your definition is different, feel free to state it.
Almost all “interpretations” (using the word as used in the literature) of quantum mechanics use different axioms, and it’s a mathematical question whether or not the theories make the same predictions. Many stochastic “interpretations” modify the Schroedinger equation, for instance. Even many-worlds can’t be proven to be an interpretation under your definition (no one has shown it actually leads to the same predictions as Copenhagen).

It’s an unfortunate artifact of the literature on various approaches to quantum mechanics that the words “interpretation” and “theory” often overload each other, but it’s the reality we live in.
If that’s your definition, then it’s an empirical question whether a given model is an interpretation, because it’s often contingent and difficult-to-demonstrate that a given ontology is necessarily ‘invisible’, as opposed to ‘potentially-testable-but-as-yet-untested’. Impossibility proofs about future scientific experiments are no easy task. So we can only speculate about whether Many Worlds, Bohmian Mechanics, Collapse theories, etc. are ‘interpretations’ in your sense.
(And by ‘invisible’ I gather you mean ‘empirically equivalent to a certain set of rivals’—so whether something’s an interpretation is always relative to other models, the real predicate is interpretation-compared-to-x.)
Eliezer: privileging the hypothesis is known as the Prosecutor’s fallacy. I like your name better though.
I see how this applies to different deities of the same complexity. What about the Maimonidean-type “negative theology”—http://en.wikipedia.org/wiki/Negative_theology#In_the_Jewish_tradition ? Basically this implies a perfectly simple deity “of reference class size 1”. It seems harder to say that the hypothesis is arbitrary in this case.
Are you sure?
No, that’s why I’m asking ;). The reference you point to certainly provides no immediate answer (pretty much a placeholder); I agree that simplicity can be fake, but if you define something as
“God’s existence is absolute and it includes no composition and we comprehend only the fact that He exists, not His essence. Consequently it is a false assumption to hold that He has any positive attribute… still less has He accidents (מקרה), which could be described by an attribute. Hence it is clear that He has no positive attribute whatever. The negative attributes are necessary to direct the mind to the truths which we must believe… When we say of this being, that it exists, we mean that its non-existence is impossible; it is living — it is not dead; …it is the first — its existence is not due to any cause; it has power, wisdom, and will — it is not feeble or ignorant; He is One — there are not more Gods than one… Every attribute predicated of God denotes either the quality of an action, or, when the attribute is intended to convey some idea of the Divine Being itself — and not of His actions — the negation of the opposite”
it sounds like it’s perfectly simple, by definition. BTW, even if wrong, Maimonides should get credit for recognizing the virtue of simplicity ;). This was the 12th century.
Anyway, my intellectual toolkit is not sufficient to figure this out on the spot, so I am asking for help ruminating on this a little, if anyone wants to take it up as an exercise. The question does have personal significance to me.
“Perfectly simple” means “the mathematics is simple”, not “the explanation does not have many apparent details”. This so-called definition is a classic example of a mysterious answer—an actually simple deity could be described by positive attributes.
Indeed, I think we could end up calling the simple deity by his holiest of names: Math.
What properties does “Math” have that would justify calling it a “deity”?
Actually, back up: since when is mathematics (the human endeavor) simple from a mathematical perspective?
It contains the almighty hammer Mjölnir. It is omniscient. (By volume—sure, it knows all wrong things that can possibly be represented too but hey, every other deity I have studied is defined as something outright logically incoherent so they can’t talk.) So in conclusion… not much justification at all until you worship it a bit and it starts to get personified.
If you left it at this I’d say never...
… but I’ll never cease to be amazed at what a mathematician will describe as “simple” or even “trivial” when he is in his mathematical perspective groove!
So a math professor is going through the proof of a theorem on the blackboard in front of his class. Partway through, a student stops him to ask about the justification for a particular step. The professor furrows his brow, stares at the chalkboard for a moment, then walks briskly from the room. Twenty minutes later he returns, his chalk worn down to a nub, and announces triumphantly, “it’s obvious”.
I know this one as: professor walks into a class, scrawls an equation on the blackboard, and announces “I’m sure you’ll all agree that this is obvious.” Then he stops, stares at it, walks away, comes back 20 minutes later and says “Yes, that’s right, it is obvious.”
Exactly. I wonder if anyone has a good link to a particularly witty or authoritative expression of this parody. I find it warrants reference rather frequently.
It’s one of the standard famous anecdotes about Norbert Wiener.
(Oddly enough, the reason I know so much about Wiener is because Dan Simmons in Hyperion & Fall of Hyperion based Sad King Billy on him.)
I’m just reading Thomas Schelling’s The Strategy of Conflict, and one of his key tenets is that providing an identifiable point around which a discussion can be centered will tend to make the discussion center on it (classic anchoring). However, he points out that in many cases, having a “line in the sand” brings benefits to all sides by allowing intermediate deals to be struck where only extremes were possible before.
This article, however, clearly demonstrates that having a line in the sand can be just as bad as it can be good, as with all biases. Still, I really recommend Schelling’s take on “what is good” (in the evolutionary sense) about this phenomenon.
The following formula is difficult to read at a first glance because of the unfortunate line break: (water → ice cubes + electricity)
I was first trying to parse it as: water “minus” “is greater than” ice cubes...
I’ve heard the
foo->bar
construct in C pronounced something like that at least once.

The Wikipedia article for Abductive Reasoning claims this sort of privileging the hypothesis can be seen as an instance of affirming the consequent.
Good post.
I’m not sure that ‘privileging the hypothesis’ deserves to be called a fallacy, though. It’s only a bad idea because of the biases that humans happen to have. It can lead to misconceptions for us primates, but it’s not a logical error in itself, is it?
It may not be a completely generic bias or fallacy, but it certainly can affect more than just human decision processes. There are a number of primitive systems that exhibit pathologies similar to what Eliezer is describing; speech recognition systems, for example, have a huge issue almost exactly isomorphic to this. Once some interpretation of an audio wave becomes a hypothesis, it is chosen in great excess of its real probability or confidence. This is the primary weakness of rule-based voice grammars: their pre-determined possible interpretations lead to unexpected inputs being slotted into the nearest pre-existing hypothesis, rather than leading to a novel interpretation. The use of statistical grammars to try to pound interpretations back to their ‘natural’ probabilistic initial weight is an attempt to avoid this issue.
This problem is also hidden in a great many AI decision systems within the ‘hypothesis generation’ system, or equivalent. However elegant the ranking and updating system, if your initial possible list is weak, you distort your whole decisions process.
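To make the distortion concrete, here is a toy sketch (entirely my own illustration; the grammar, the similarity score, and the threshold are all made up, not any real recognizer’s API). A “rule-based grammar” that can only consider a fixed list of hypotheses must cram every input into the nearest one, however poor the fit, while keeping a rejection option restores sane behavior:

```python
def score(hypothesis, observation):
    """Crude similarity: Jaccard overlap of the word sets."""
    h, o = set(hypothesis.split()), set(observation.split())
    return len(h & o) / len(h | o)

GRAMMAR = ["call home", "play music", "set alarm"]  # fixed hypothesis list

def rule_based(observation):
    # Forced choice: always returns some grammar entry,
    # even when nothing in the list fits the input at all.
    return max(GRAMMAR, key=lambda h: score(h, observation))

def with_rejection(observation, threshold=0.3):
    # A "none of the above" option lets novel input stay novel.
    best = max(GRAMMAR, key=lambda h: score(h, observation))
    return best if score(best, observation) >= threshold else None

print(rule_based("order a pizza"))      # slotted into a pre-existing hypothesis
print(with_rejection("order a pizza"))  # None: the novelty is recognized
```

The forced-choice version is the pathology: once “call home” is on the list, it gets chosen even when its evidence score is zero.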
At that point we’re dealing with a full-fledged artificial heuristic and bias—the generation system is the heuristic, and the bias is the overly limited collection of hypotheses it manages to formulate for explicit attention at a given point.
I’d reserve “fallacy” for motivated or egregious cases, the sort that humans try to get away with.
Is then the ability to explicitly (at a high, abstract level) reach down to the initial hypothesis generation and include, raise, or add hypotheses for consideration always a pathology?
I can imagine a system where extremely low probability hypotheses, by virtue of complexity or special evidence required, might need to be formulated or added by high level processes, but you could simply view that as another failure of the generation system, and require that even extremely rare or novel structures of hypotheses must go through channels to avoid this kind of disturbance of natural frequencies, as it were.
It’s most definitely a fallacy. It puts forth a conclusion without sufficient evidence to justify the conclusion. Just like an argument from authority or a gambler’s fallacy.
It’s not actually putting it forth as a conclusion though—it’s just a flaw in our wetware that makes us interpret it as such. We could imagine a perfectly rational being who could accurately work out the probability of a particular person having done it, then randomly sample the population (or even work through each one in turn) looking for the killer. Our problem as humans is that once the idea is planted, we overreact to confirming evidence.
Post-hoc hypothesising is only a problem when you’re using that hypothesis to analyse the same data that inspired it. Machine learning experts avoid this mistake by backtesting and forward-testing on out-of-sample data.
Analysing unstructured data is useful for generating hypotheses, rather than for testing them to develop a model. Take computational epidemiology:
What do you think of Huw Price’s suggestion (http://www.powells.com/biblio/72-9780195117981-0) that if one allows for the possibility of advanced action, it’s possible to have paradox-free physics within a single universe, since Bell’s theorem only proved the non-existence of non-local hidden variables?
I’d call this “Mistaking the Likelihood for the Posterior”.
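That name can be cashed out in the post’s own scenario with a toy Bayesian calculation. The numbers below are my own illustrative assumptions (a million inhabitants, a 40% base rate of brown hair, a perfectly reliable witness):

```python
# Likelihood vs. posterior for Mortimer, with invented numbers.
prior = 1 / 1_000_000        # no evidence singles out Mortimer
p_brown_given_guilty = 1.0   # likelihood: Mortimer is brown-haired
p_brown = 0.4                # assumed base rate of brown hair in Largeville

# Bayes: P(guilty | brown) = P(brown | guilty) * P(guilty) / P(brown)
posterior = p_brown_given_guilty * prior / p_brown
print(posterior)  # ~2.5e-06
```

The likelihood of the evidence given Mortimer’s guilt is as high as it can be, yet the posterior only rises from one in a million to roughly 2.5 in a million: the hair color shrank the suspect pool, it did not locate Mortimer in it.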
Poor Mortimer he always gets the blame. :)
Surely being a supervillain (possibly formerly) is worth a lot of evidence against Mortimer, though.
This isn’t a fallacy; this is just trusting the information you’re given. You’ve gamed the system to break expectations.
Someone tells me, “Don’t touch the stove when it’s red; it will burn your hand.” That’s the hypothesis. I’m assuming it’s built on some experience and knowledge. Of course I’m going to privilege this particular hypothesis instead of the many others, like “When the stove is red, you will win the lottery,” or “When the stove is red, you can’t die.” I’m trying to get up to speed on an ongoing situation, and so I’m going to trust that some hypotheses have already been discarded as useless by those who are informing me of the situation.
Someone who spends all day thinking about whether the Trinity exists is probably doing so because his parents raised him in Christianity, and so he is trusting his parents’ wisdom, or he was told about Christianity from a missionary, who he trusts. We trust experts. I don’t see this as a bad thing.
Speaking of privileging the hypothesis…
http://www.theonion.com/content/video/crime_reporter_finds_way_of
Yes, many people understand this fallacy on some level:
http://glennbeckrapedandmurderedayounggirlin1990.com/
This is essentially an instance of availability bias. Of course, the most interesting case goes beyond a single declarative hypothesis being elevated above the other inhabitants of the hypothesis space for a particular question: models have other effects that reach far beyond mere availability.
This is because our initial model won’t just form the first thing we think of when we examine the question, but some of the very structures we use when we formulate the question. Indeed, how we handle our models is easily responsible for the majority of the biases that have been discussed here and at Overcoming Bias.
In the case of the models mentioned in this post about quantum mechanics—we can look at the first quantum mechanics interpretation as having its own version of hypothesis privilege. This means we should downgrade it. Of course we should also do the same with its immediate successor, Many Worlds (although perhaps not as much). After all, it is the interpretations we haven’t thought of which are being penalized the most by the effect of privileging the hypothesis.
But to adequately apply this discounting beyond the direct route, we need to understand the way in which these models affect our thinking. How do these models encourage other models being developed, and how do they blind our cognitive architecture to different avenues?
This becomes even more pronounced when more is at stake than epistemic rationality. Since these two models are also sides (even fairly politely) in cultural and political conflicts, there are further biases that arise, as parts of these ideas become further tied to status, self-esteem, social behaviors, and habits.