Update Yourself Incrementally
Politics is the mind-killer. Debate is war, arguments are soldiers. There is the temptation to search for ways to interpret every possible experimental result to confirm your theory, like securing a citadel against every possible line of attack. This you cannot do. It is mathematically impossible. For every expectation of evidence, there is an equal and opposite expectation of counterevidence.
But it’s okay if your cherished belief isn’t perfectly defended. If the hypothesis is that the coin comes up heads 95% of the time, then one time in twenty you will expect to see what looks like contrary evidence. This is okay. It’s normal. It’s even expected, so long as you’ve got nineteen supporting observations for every contrary one. A probabilistic model can take a hit or two, and still survive, so long as the hits don’t keep on coming in.
Yet it is widely believed, especially in the court of public opinion, that a true theory can have no failures and a false theory no successes.
You find people holding up a single piece of what they conceive to be evidence, and claiming that their theory can “explain” it, as though this were all the support that any theory needed. Apparently a false theory can have no supporting evidence; it is impossible for a false theory to fit even a single event. Thus, a single piece of confirming evidence is all that any theory needs.
It is only slightly less foolish to hold up a single piece of probabilistic counterevidence as disproof, as though it were impossible for a correct theory to have even a slight argument against it. But this is how humans have argued for ages and ages, trying to defeat all enemy arguments, while denying the enemy even a single shred of support. People want their debates to be one-sided; they are accustomed to a world in which their preferred theories have not one iota of antisupport. Thus, allowing a single item of probabilistic counterevidence would be the end of the world.
I just know someone in the audience out there is going to say, “But you can’t concede even a single point if you want to win debates in the real world! If you concede that any counterarguments exist, the Enemy will harp on them over and over—you can’t let the Enemy do that! You’ll lose! What could be more viscerally terrifying than that?”
Whatever. Rationality is not for winning debates, it is for deciding which side to join. If you’ve already decided which side to argue for, the work of rationality is done within you, whether well or poorly. But how can you, yourself, decide which side to argue? If choosing the wrong side is viscerally terrifying, even just a little viscerally terrifying, you’d best integrate all the evidence.
Rationality is not a walk, but a dance. On each step in that dance your foot should come down in exactly the correct spot, neither to the left nor to the right. Shifting belief upward with each iota of confirming evidence. Shifting belief downward with each iota of contrary evidence. Yes, down. Even with a correct model, if it is not an exact model, you will sometimes need to revise your belief down.
If an iota or two of evidence happens to countersupport your belief, that’s okay. It happens, sometimes, with probabilistic evidence for non-exact theories. (If an exact theory fails, you are in trouble!) Just shift your belief downward a little—the probability, the odds ratio, or even a nonverbal weight of credence in your mind. Just shift downward a little, and wait for more evidence. If the theory is true, supporting evidence will come in shortly, and the probability will climb again. If the theory is false, you don’t really want it anyway.
The problem with using black-and-white, binary, qualitative reasoning is that any single observation either destroys the theory or it does not. When not even a single contrary observation is allowed, it creates cognitive dissonance and has to be argued away. And this rules out incremental progress; it rules out correct integration of all the evidence. Reasoning probabilistically, we realize that on average, a correct theory will generate a greater weight of support than countersupport. And so you can, without fear, say to yourself: “This is gently contrary evidence, I will shift my belief downward.” Yes, down. It does not destroy your cherished theory. That is qualitative reasoning; think quantitatively.
For every expectation of evidence, there is an equal and opposite expectation of counterevidence. On every occasion, you must, on average, anticipate revising your beliefs downward as much as you anticipate revising them upward. If you think you already know what evidence will come in, then you must already be fairly sure of your theory—probability close to 1—which doesn’t leave much room for the probability to go further upward. And however unlikely it seems that you will encounter disconfirming evidence, the resulting downward shift must be large enough to precisely balance the anticipated gain on the other side. The weighted mean of your expected posterior probability must equal your prior probability.
How silly is it, then, to be terrified of revising your probability downward, if you’re bothering to investigate a matter at all? On average, you must anticipate as much downward shift as upward shift from every individual observation.
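(A minimal numeric sketch of that balance, not from the original post: the 50/50 prior, the 95%-heads hypothesis, and the fair-coin alternative below are illustrative assumptions.)

```python
# Minimal sketch: conservation of expected evidence for the 95%-heads coin.
# H1 = "the coin comes up heads 95% of the time", H2 = "the coin is fair".
PRIOR_H1 = 0.5  # assumed prior, for illustration only

def posterior_h1(saw_heads: bool) -> float:
    """Posterior probability of H1 after one flip, by Bayes' theorem."""
    p_d_h1 = 0.95 if saw_heads else 0.05
    p_d_h2 = 0.50
    joint_h1 = PRIOR_H1 * p_d_h1
    joint_h2 = (1 - PRIOR_H1) * p_d_h2
    return joint_h1 / (joint_h1 + joint_h2)

# Probability of heads on the next flip, averaged over both hypotheses.
p_heads = PRIOR_H1 * 0.95 + (1 - PRIOR_H1) * 0.50

expected_posterior = p_heads * posterior_h1(True) + (1 - p_heads) * posterior_h1(False)
print(posterior_h1(True), posterior_h1(False))  # ~0.655 up on heads, ~0.091 down on tails
print(expected_posterior)                       # 0.5: the shifts cancel on average
```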
It may perhaps happen that an iota of antisupport comes in again, and again and again, while new support is slow to trickle in. You may find your belief drifting downward and further downward. Until, finally, you realize from which quarter the winds of evidence are blowing against you. In that moment of realization, there is no point in constructing excuses. In that moment of realization, you have already relinquished your cherished belief. Yay! Time to celebrate! Pop a champagne bottle or send out for pizza! You can’t become stronger by keeping the beliefs you started with, after all.
“If you’ve already decided which side to argue for, the work of rationality is done within you, whether well or poorly. But how can you, yourself, decide which side to argue? If choosing the wrong side is viscerally terrifying, even just a little viscerally terrifying, you’d best integrate all the evidence.”
OK, here, now go take your own advice. The book is from an academic imprint and pretty expensive, so if you can’t find it in your local university library I’ll snail-mail you some relevant extracts.
Matthew C:
I don’t understand why the Million Dollar Challenge hasn’t been won. I’ve spent some time in the JREF forums and as far as I can see the challenge is genuine and should be easily winnable by anyone with powers you accept. The remote viewing, for instance, that I see on your blog. That’s trivial to turn into a good protocol. Why doesn’t someone just go ahead and prove these things exist? It’d be good for everyone involved. I see you say: “But for the far larger community of psi deniers who have not read the literature of evidence for psi, and get all your information from the Shermers and Randis of the world, I have a simple message: you are uninformed.” So obviously you think that either Randi has bad information or is deliberately sharing bad information. That’s fine. If the Challenge is set up correctly it shouldn’t matter what Randi does or does not believe/know/whatever. I can only conclude there is at least one serious flaw in the Challenge. Could you tell me what it is?
Matthew: As far as I can tell, psi is not a hypothesis that constrains the probability density of predictions; it simply says “anything goes, anything can happen.” As such, isn’t it just an instance of radical skepticism? The thing is, radical skeptical arguments don’t change anticipations or prescribe changes in behavior. Taken seriously, it’s not clear that such hypotheses even constitute arguments for their own advocacy. Maybe if I draw attention to the unknowable demons behind the curtain I will be better able to deal with them, but maybe that will cause them to eat me. I don’t see how an expected value calculation shows that the former is more likely than the latter, just as I don’t see how a god who punishes atheists is any less likely than one who punishes believers. Related question: what evidence would cause you to relinquish the psi hypothesis?
You want me to believe precognition has been scientifically established? Give me one single research protocol which reliably (90% probability) produces results at the p < 0.01 significance level for events 30 minutes in the future.
If the effect is real, however small, there will exist some number of subjects/trials that reliably amplifies the effect to any given level of statistical significance.
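(An illustrative sketch of that claim, not part of the original exchange: the hit rates below are made-up numbers, and the normal-approximation sample-size formula for a one-sided test is my own choice of method.)

```python
# Sketch: how many trials make a small but real effect reach a given significance
# level reliably. All specific numbers are illustrative.
from math import sqrt
from scipy.stats import norm

def trials_needed(true_rate: float, chance_rate: float = 0.5,
                  alpha: float = 0.01, power: float = 0.90) -> int:
    """Trials needed so a real effect of size true_rate is detected at level alpha
    with the requested probability (power), one-sided normal approximation."""
    z_alpha = norm.ppf(1 - alpha)
    z_power = norm.ppf(power)
    s0 = sqrt(chance_rate * (1 - chance_rate))
    s1 = sqrt(true_rate * (1 - true_rate))
    return int(((z_alpha * s0 + z_power * s1) / (true_rate - chance_rate)) ** 2) + 1

print(trials_needed(0.52))  # a 2% edge over chance: roughly 8,000 trials
print(trials_needed(0.55))  # a 5% edge: roughly 1,300 trials
```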
Actually, rather than rehashing the entire psi debate here, I’d much prefer you just read the material instead. Chapter 3 of Irreducible Mind is particularly powerful, and I will send excerpts to anyone who gives me a US postal address or PO box (email mcromer @t blast dawt com). The natural history of these phenomena is very easily available, and often very well documented.
Got protocol? Yes or no?
“Got protocol? Yes or no?”
If there were any actual evidence, somebody would have claimed Randi’s million-dollar prize years ago. I wasn’t able to find a copy of “The Irreducible Mind” online; it doesn’t have a Wikipedia article and apparently isn’t that popular. A quick Google of the authors reveals that only one (Bruce Greyson) has a Wikipedia article (http://en.wikipedia.org/wiki/Bruce_Greyson). The lead author, Edward F. Kelly, is employed as a professor of “Perceptual Studies” at the University of Virginia Health System (http://www.healthsystem.virginia.edu/internet/personalitystudies/Edbio.cfm) and has a PhD from Harvard in “Psycholinguistics/Cognitive Science”. The authors seem to work mainly within the field of psychology, asserting that it has “no explanation” for the human mind (http://www.amazon.com/Irreducible-Mind-hard-find-contemporary/dp/customer-reviews/0742547922).
As for the other two links, the first one sounds like nonsense; the “research” was not peer-reviewed, replicated or verified and was “released exclusively to the Daily Mail”, a well-known London tabloid (http://en.wikipedia.org/wiki/Daily_Mail). The article he linked is from The Evening Standard, another British tabloid (http://en.wikipedia.org/wiki/The_Evening_Standard), and asserts that “Virtually all the great scientific formulae which explain how the world works allow information to flow backwards and forwards through time—they can work either way, regardless.”, as well as a great deal of other obvious nonsense. The second one lists a number of anecdotes, none of which have sources, identifying references or even names.
Information flowing both backward and forward through time would be useless to us, since we perceive and move in only one direction, but it’s not obviously nonsense. Our perception moves forward through time, so it seems obvious to us that cause leads to effect.
However… if, in fact, the effect precipitates the cause… Or some feedback combination of both… How would we actually be able to tell? Our perception only computes in one direction so we always see the cause half of it first and then the effect.
If there were people reliable enough at passing information back to their past selves to beat random chance though I expect they would already have found a way to make use of it.
“If the hypothesis is that the coin comes up heads 95% of the time, then one time in twenty you will see what looks like contrary evidence.”
My question here assumes that you mean one in twenty times you get a tails (if you mean one in twenty times you get a heads, then I’m also confused but for different reasons).
Surely if I have a hypothesis that a coin will land heads 95% of the time (and therefore tails 5% of the time) then every cluster of results in which 1⁄20 are tails is actually supporting evidence. If I toss a coin X times (where X is some number whereby 95% is a meaningful description of outcomes: X >= 20) and 1 out of those 20 is tails, that actually is solid evidence in support of my hypothesis—if, as you say, “one in twenty times” I see a tails, that is very strong evidence that my 95% hypothesis is accurate...
Have I misread your point or am I thinking about this from the wrong angle?
“Have I misread your point or am I thinking about this from the wrong angle?”
Maybe the belief here is “the next flip of the coin will be heads”. Then each head causes your confidence in that belief to increase, while each tail causes a decrease in that confidence.
You’re right, though; the belief “the coin is heads 94-96% of the time” behaves according to more complicated rules. Even if it is true, every so often you will still get evidence that contradicts your belief—such as twenty tails in a row. But not often, and Eliezer’s point still applies.
John, Stuart, let’s do the math:
H1: “the coin will come up heads 95% of the time.”
Whether a given coinflip is evidence for or against H1 depends not only on the value of that coinflip, but on what other hypotheses you are comparing H1 to. So let’s introduce...
H2: “the coin will come up heads 50% of the time.”
By Bayes’ Theorem (odds form), the odds conditional upon the data D are:
p(H1|D) / p(H2|D) = [p(H1) / p(H2)] × [p(D|H1) / p(D|H2)]
So when we see the data, our odds are multiplied by the likelihood ratio p(D|H1)/p(D|H2).
If D = heads, our likelihood ratio is:
p(heads|H1) / p(heads|H2) = .95 / .5 = 1.9.
If D = tails, our likelihood ratio is:
p(tails|H1) / p(tails|H2) = .05 / .5 = 0.1.
If you prefer to measure evidence in decibels, then a result of heads is 10 log10(1.9) ≈ +2.8 dB of evidence and a result of tails is 10 log10(0.1) = −10.0 dB of evidence.
The same result is true regardless of how you group the coinflips; if you get nothing but heads, that is even stronger evidence for H1 than if you get 95% heads and 5% tails. This is true because we are only comparing it to hypothesis H2. If we introduce hypothesis H3:
H3: “the coin will come up heads 99% of the time.”
Then we can also measure the likelihood ratio p(D|H1) / p(D|H3).
Plugging in “heads” or “tails”, we get:
p(heads|H1) / p(heads|H3) = 0.95 / 0.99 = 0.9595…
p(tails|H1) / p(tails|H3) = 0.05 / 0.01 = 5.0
So a result of heads is about −0.18 dB of evidence for H1, and a result of tails is about +7.0 dB of evidence.
If you have a uniform prior on [0, 1] for the frequency of a heads, then you can use Laplace’s Rule of Succession.
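(For concreteness, a small sketch of the same arithmetic, plus the Rule of Succession; the 19-heads-in-20 example at the end is just an illustrative input.)

```python
# Sketch of the likelihood-ratio arithmetic above, plus Laplace's Rule of Succession.
from math import log10

def decibels(likelihood_ratio: float) -> float:
    """Evidence strength in decibels: 10 * log10 of the likelihood ratio."""
    return 10 * log10(likelihood_ratio)

# H1: 95% heads, H2: 50% heads, H3: 99% heads.
print(decibels(0.95 / 0.50))  # heads, H1 vs H2: about +2.8 dB
print(decibels(0.05 / 0.50))  # tails, H1 vs H2: -10.0 dB
print(decibels(0.95 / 0.99))  # heads, H1 vs H3: about -0.18 dB
print(decibels(0.05 / 0.01))  # tails, H1 vs H3: about +7.0 dB

def laplace_next_heads(heads_seen: int, flips: int) -> float:
    """Rule of Succession: with a uniform prior on the heads frequency,
    P(next flip is heads) = (heads_seen + 1) / (flips + 2)."""
    return (heads_seen + 1) / (flips + 2)

print(laplace_next_heads(19, 20))  # 19 heads in 20 flips -> 20/22, about 0.91
```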
McCabe’s single-paragraph dismissal of an 800-page book with hundreds of footnotes that he hasn’t read, based on Wikipedia entries, seems to be the precise opposite of the raison d’être of Overcoming Bias. And Yudkowsky, I simply dare you to read this book. You talk the good talk here about The Way and the search for truth. I dare you to expose yourself to some of the meticulously documented lacunae in your worldview by reading Irreducible Mind. I dare you on your sense of intellectual pride. Chapter 3 is a good place to start…
So there’s no reproducible protocol, then?
I have better things to do with my time.
Matthew C—it sounds more like you’re trying to sell a book than produce a testable experiment.
Here’s the thing.
I could read a book and find that the arguments in the book are “valid”—that it is impossible, or at least unlikely, for the premises to be true and the conclusion false. However, what I can’t do by reading is determine whether the premises are true.
In the infamous Alien Autopsy “documentary”, there were three specific claims made for the authenticity of the video.
1) An expert from Kodak examined the film, and verified that it is as old as was claimed.
2) A pathologist was interviewed, who said that the autopsy portrayed was done in the manner that an actual autopsy would have been done.
3) An expert from Spielberg’s movie studio testified that modern special effects could not duplicate the scenes in the video.
If you accept these statements as true, it becomes reasonable to accept that the footage was actually showing what it appeared to show: an autopsy of dead aliens.
Upon seeing these claims, though, my response was along the lines of “I defy the data.” As it turns out, all three of those statements were blatant lies. There was no expert from Kodak who verified the film; Kodak offered to verify the film, but was denied access. Many other pathologists said that the way the autopsy was performed in the film was absurd, and that no competent pathologist would ever do an autopsy on an unknown organism in that manner because it would be completely useless. The person from Spielberg’s movie studio was selectively quoted and was very angry about it. What he really said was that the film was good for whatever grade-B studio happened to have produced it.
I could read your book, but I believe that it is more likely that the statements in the book are wrong than it is that psi exists. As Thomas Jefferson did not say, “It is easier to believe that two Yankee professors [Profs. Silliman and Kingsley of Yale] would lie than that stones would fall from the sky.”
The burden of proof is on you, Matthew. Many, many claims of the existence of “psi” have been shown to be bogus, so I give further claims of that nature very little credence. Either tell us about a repeatable experiment—copy a few paragraphs from that book if you have to—or we’re going to ignore you.
Although I also think Psi is bogus, my belief has nothing to do with the fact that previous claims of psi have been bogus. Evidence can never justify a theory, any more than finding 10 white swans in a row proves that there are no black swans! Believing that psi is false because of evidence that psi has been false in the past is the logical fallacy of inductivism. Most rational people do not believe in Psi because it has no logical theoretical/scientific basis and because it does not explain things well.
Much of this type of argument strikes me as nonsense. Something that is true can not be justified. One can (and should) argue that something is true. But argument is not justification. If the argument explains something well, then one should believe it, if it is the best theory available.
But evidence can never support any argument. It merely corroborates it. The reason that you believe a coin is fair is not ultimately because the results of an experiment convince you. It would be easy to set up an algorithm that causes the first 3000 examples of a computer simulated coin-flip to have the correct number of heads or tails to make the uninformed believe that the simulated coin flip is fair. But the next 10,000 could yield very different results, just by using an easy-to-create mathematical algorithm. No p-value can be assigned even after 3000 computer simulations of a coin flip. The data never tell a story (to quote someone on another site).
The reason we rationally believe the results of experiment when we flip the coin, but not when we see an apparent computer simulation of a coin flip is: In the case of the actual coin we already have explanations of the effects of gravity on two-sided metal objects, well before we have any data about coin flips. The same is not true about the computer simulation of the coin flip, unless we see the program ahead of time.
It is the theory about the effects of gravity on two-sided metal objects (with a particular pattern of metal distribution) that we try to evaluate when we flip coins. The data never tell us a story about whether the coin is fair. We first have a theory about the coin and its properties, and then we utilize the experiment (the coin flip) to try to falsify our notion that the coin is fair, if the coin looks balanced. Or we falsify the notion that the coin is not fair, if our initial theory is that the coin does not look balanced. Examples of a phenomenon do not increase the probability that it is true.
The reason we may believe that a coin could be fair is that we first evaluate the structure of the material, note that it seems to have a structure that would promote fairness given standard human flips of coins. Only then do we test it. But it is our rational understanding of the properties of the coin and expectations about the environment which make the coin flip reasonable. The results of any test tell you nothing (logically, nothing at all) about the fairness of a coin unless you first have a theory and an explanation about why the coin should or should not be considered fair.
The reason we do not believe in psi is that it does not explain anything, violates multiple known laws of physics, yet creates no alternative scientific structure that allows us to understand and predict events in our world.
This is pretty muddled and wrong. You use a lot of terms in an unorthodox way. For example I don’t know how something that is true cannot ever be justified (how else do you know it’s true!). Also, there is no such thing as science without induction, no laws of physics or predictions. So I’m pretty confused about what your position is. That’s okay though because it looks like you’ve never heard of Bayesian inference. In which case this is a really important day in your life.
The Wikipedia entry
The SEP entry
Eliezer’s explanation of the Math
Also: the “Rationality and Science” subsection at the bottom here.
Who has better links?
Edit: Welcome to less wrong, btw! Feel free to introduce yourself.
Edit again: This PDF looks good.
I wouldn’t call Popperianism unorthodox exactly.
I sort of see some Popper in the comment but I also see a good deal that isn’t.
“For example I don’t know how something that is true cannot ever be justified (how else do you know it’s true!)”
You can’t know that something is true. We are fallible, and our best theories are often wrong. We gain knowledge by arguing with each other and trying to point out logical contradictions in our explanations. Experiments can help us to show that competing explanations are wrong (or that ours is!).
Induction as a scientific methodology has been known (since Hume) to be impossible. Happy to discuss this further if you like. I will certainly read the articles you suggest. Please consider reading David Deutsch’s The Fabric of Reality. He (better than Hume, in my estimation) shows the complete irrationality of induction, but I am happy to discuss, if you are interested.
I agree with Hume about just about everything. You’re misreading him. Induction definitely isn’t impossible. We do it all the time. Scientists do it for a living. Hume certainly didn’t think it was impossible. What he thought was that there was no deductive reason for expecting that today will be like yesterday. The only justification is induction itself. Thus, any inductive argument begs the question. But his solution definitely wasn’t to throw it out and wallow in extreme skepticism. He thought induction was inevitable (not even something we will, just part of psychological habit formation) and was pretty much the only way of having knowledge about anything.
Hume’s position is basically my position. Though I have some sketchy arguments in my head that might let us go farther than Hume, I’m more than comfortable with that. Now it turns out that if your psychological habit formation occurs in a certain way (the Bayesian way) you’ll start winning bets against those who form beliefs in different ways. It also lets us do statistical/probabilistic experimentation which would never falsify anything but can provide evidence for and against theories. It also explains why we like unfalsified theories that have been tested many, many times more than unfalsified theories that have rarely been tested.
If Deutsch has other arguments you can spell out here I’d be happy to hear them.
This is true if you take “know” to mean “absolute certainty”. And, precisely because absolute certainty never happens, taking “know” in this sense would be pointless. We would never have the opportunity to use such a word, so why bother having it? For that reason, people on this site take the assertion that they “know” a proposition P to mean that the evidence they’ve gathered adds up to a sufficiently high probability for P. Here,
- “sufficiently high” depends on the context — for example, the expected cost/benefit of acting as though P is true; and
- the evidence that they’ve gathered “adds” in the sense of Bayesian updating.
That’s all that they mean by “know”.
On the Bayesian interpretation, induction is just a certain mathematical computation. The only limits on its possibility are the limits on your ability to carry out the computations.
“evidence they’ve gathered adds up to a sufficiently high probability for P”
Perhaps I should ask what you mean by “evidence”? By evidence do you mean examples of an event happening that corroborate a particular theory that someone holds?
So if
- you have an expectation of something happening, and
- that something happens,
then you are saying that the event is evidence in favor of the theory. And if the event happens even more when you expect it to then it is even more evidence for the theory, and this increased probability is calculated by using a Bayesian rule to update your increased expectation of the likelihood of the truth of your theory?
Have I stated your argument correctly?
All input that you have access to is potentially evidence. That is, ideally, all your input would figure into your evaluation of the probability of any proposition whatsoever. And if some input E weren’t evidence with respect to some particular proposition H, you would still have to run the Bayesian updating computation to determine that E didn’t change the probability that you ought to assign to H.
Obviously, in practice, computing the upshot of all your input is so ideal as to be physically impossible. But, in principle, everything is evidence.
Contradicting prior expectation is a particularly potent kind of evidence. But it is only a special case. Search for “Popper” at Eliezer’s An Intuitive Explanation of Bayes’ Theorem.
“And if the event happens even more when you expect it to then it is even more evidence for the theory”
I am not sure you agreed with this based on your response but I will assume that you did. But correct me if I am wrong!
If you did agree, then consider the Bayesian turkey. Every time he gets fed in November, he concludes that his owner really wants what’s best for him and likes him, because he enjoys eating and keeps getting food. Every day more food is provided, exactly as he expects given his theory, so he uses Bayesian statistical inference to increase the confidence he has in his theory about the beneficence of his master. As more food is provided, exactly according to his expectations, he concludes that his theory is becoming more and more likely to be true. Towards the end of November, he considers his theory very true indeed.
You can guess the rest of the story. Turkeys are eaten at Thanksgiving. The turkey was killed.
I think you can see that probabilistic evidence, or any evidence, does not (cannot) logically support a theory. It merely corroborates it. One cannot infer a general rule from an example of something. Exactly the opposite is the case. One cannot infer that because food is provided each day, it will continue to be provided each day. Examples of food being provided do not increase the likelihood that the theory is true. But good theories about the world (people like to eat turkeys on Thanksgiving) help one develop expected probabilities of events. If the turkey had a good theory, he would rationally expect certain probabilities. For example, he would predict that he would be given food up until Nov. 25th, but not after.
I can summarize like this. Outcomes of probabilistic experiments do not tell us what it is rational to believe, any more than the turkey was justified in believing in the beneficence of his owner because he kept getting food in November. Probability does not help us develop rational expectations. Rational expectations, on the other hand, do help us to determine what is probable. When the turkey has a rational theory, he can determine the likelihood that he will or will not be given food on a given day.
A perfect Bayesian turkey would produce multiple hypotheses to explain why he is being fed. One hypothesis would be that his owner loves him, another would be that he is being fattened for eating. Let us stipulate that those are the only possibilities. When the turkey continues to be fed that is new data. But that data doesn’t favor one hypothesis over the other. Both hypotheses are about equally consistent with the turkey continuing to be fed so little updating will occur in either direction.
But this gives the game away. What makes this theory a good one is that people have eaten turkeys for Thanksgiving in the past and induction tells us they are likely to do so in the future (absent other data that suggests otherwise like a rise in Veganism or something). If the turkey had this information it isn’t even close. The probability distribution immediately shifts drastically in favor of the Thanksgiving meal hypothesis.
Then, if Thanksgiving comes and goes and the turkey is still being fed he can update on that information and the probability his owner loves him goes up again.
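(A minimal sketch of that point; the prior odds, likelihoods, and the weight given to background data below are invented purely for illustration.)

```python
# Sketch: an observation that both hypotheses predict equally well moves the odds
# not at all, no matter how many times it recurs.
def update_odds(prior_odds: float, p_data_h1: float, p_data_h2: float) -> float:
    """Posterior odds of H1:H2 after one observation (odds form of Bayes' theorem)."""
    return prior_odds * (p_data_h1 / p_data_h2)

# H1 = "the owner loves the turkey", H2 = "the turkey is being fattened for Thanksgiving".
odds = 1.0                                # start indifferent between the two
for _ in range(30):                       # a month of daily feedings
    odds = update_odds(odds, 0.99, 0.99)  # both hypotheses predict being fed
print(odds)                               # still 1.0: the feedings favor neither

# Background data about what usually happens to farm turkeys in late November carries
# a lopsided likelihood ratio and swamps the feedings entirely.
print(update_odds(odds, 0.05, 0.95))      # about 0.05: the Thanksgiving hypothesis wins
```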
“What makes this theory a good one is that people have eaten turkeys for Thanksgiving in the past and induction tells us they are likely to do so in the future (absent other data that suggests otherwise like a rise in Veganism or something).”
I do appreciate your honesty in making this assumption. Usually inductivists are less candid (but believe exactly as you do, secretly. We call them crypto-inductivists!)
But there is no law of physics, psychology, economics, or philosophy that says that the future must resemble the past. There also is no law of mathematics or logic that says that when a sequence of 100 zeroes in a row is observed, the next one is more likely to be another zero. Indeed, there are literally an infinite number of hypotheses consistent with 100 zeroes coming first and then anything else coming next.
With respect, the reason you believe that Thanksgiving will keep coming has everything to do with your a priori theory about culture and nothing to do with inductivism. You and I probably have rich theories that cultures can be slow to change, that brains may be hard-wired and difficult to change, that memes reinforce each other, etc. That is why we think Thanksgiving will come again. It is your understanding of our culture that allows you to make predictions about Thanksgiving, not the fact that it has happened before! For example, you didn’t keep writing the year 19XX just because you had done so repeatedly for most of your life. You were not fooled by an imaginary principle of induction when the calendar turned from 1999 to 2000. You did not keep writing 19-something just because you had written it before. You understood the calendar, just as you understand our culture and have deep theories about it. That is why you make certain predictions (Thanksgiving will keep coming, but you won’t continue to write 19XX, no matter how many times you wrote it in the past).
I think you can see that it is your rationality (not a principle of induction, not an assumption that everything stays the same) that actually caused you to have rational expectations to begin with.
Of course not. Though I’m pretty sure induction occurs in humans without them willing it. This is just Hume’s view: certain perceptions become habitual to the point where we are surprised if we do not experience them. We have no choice but to do induction. But none of this matters. Induction is just what we’re doing when we do science. If we can’t trust it, we can’t trust science.
I’m sorry, my “a priori” theory? In what sense could I possibly know about Thanksgiving a priori? It certainly isn’t an analytic truth and it isn’t anything like math or something Kant would have considered a priori. Where exactly are these theories coming from if not from induction? And how come inductivists aren’t allowed to have theories? I have lots of theories- probably close to the same theories you do. The only difference between our positions is that I’m explaining how those theories got here in the first place.
I’m afraid I don’t know what to make of your calendar and number examples. Just because I think science is about induction doesn’t mean I don’t think that social conventions can be learned. Someone explaining that after 1999 comes 2000 counts as pretty good Bayesian evidence that that is how the rest of the world counts. Of course most children aren’t great Bayesians and just accept what they are told as true. But the fact that people aren’t actually naturally perfect scientists isn’t relevant.
Rationality is just the process of doing induction right. You have to explain what you mean if you mean something else by it :-) (And obviously induction does not mean everything stays the same but that there are enough regularities to say general things about the world and make predictions. This is crucial. If there were no regularities the notion of a “theory” wouldn’t even make sense. There would be nothing for the theory to describe. Theories explain large class of phenomena over many times. They can’t do that absent regularities.)