But to conclude something whose prior probability is on the order of one over googolplex, I need on the order of a googol bits of evidence, and you can’t present me with a sensory experience containing a googol bits. Indeed, you can’t ever present a mortal like me with evidence that has a likelihood ratio of a googolplex to one—evidence I’m a googolplex times more likely to encounter if the hypothesis is true, than if it’s false—because the chance of all my neurons spontaneously rearranging themselves to fake the same evidence would always be higher than one over googolplex. You know the old saying about how once you assign something probability one, or probability zero, you can never change your mind regardless of what evidence you see? Well, odds of a googolplex to one, or one to a googolplex, work pretty much the same way.
But to conclude something whose prior probability is on the order of one over googolplex, I need on the order of a googol bits of evidence, and you can’t present me with a sensory experience containing a googol bits.
Huh? You don’t need to conclude anything whose prior probability was “on the order of one over googolplex.”
You just need to believe it enough that it out-competes the suggested actions of all the other hypotheses...and nearly all the hypotheses that had non-negligible likelihood prior to the miraculous event just got falsified, so there is very little competition...
Even if the probability of the Matrix Lord telling the truth is 1%, you’re still going to give him the five dollars, because there are infinitely many ways in which he could lie.
In fact, even if the universes in which the Matrix Lord is lying are all simpler than the one in which he is telling the truth, the actions proposed by the various lie-universes cancel each other out. (In one lie-universe he actually saves only one person; in another, equally likely lie-universe he actually kills one person; and so on.)
When a rational agent makes the decision, it calculates the expected value of the intended action over every possible universe, weighted by probability.
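A minimal sketch of that calculation, with made-up illustrative probabilities and payoffs (none of these numbers come from the original argument): one truthful branch plus two symmetric lie-universes whose payoffs cancel.

```python
# Illustrative expected-value calculation with assumed numbers.
# "Universes" are (probability, payoff-of-paying) pairs: a truthful
# branch and two equally likely lie-universes that cancel each other.
N = 10**6  # hypothetical number of lives saved if the Matrix Lord is truthful

universes = [
    (0.01, N),    # he is telling the truth: paying saves N lives
    (0.495, 1),   # lie-universe where paying still saves one person
    (0.495, -1),  # equally likely lie-universe where paying kills one person
]

# Expected value of paying, weighted over every hypothesized universe.
expected_value = sum(p * payoff for p, payoff in universes)
```

The two symmetric lie-universes contribute nothing on net, so even a 1% truthful branch dominates the decision.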
By analogy:
Suppose I tell you I’m going to pick a random natural number, I additionally tell you that there is a 1% chance I pick “42”, and I ask you to bet on which number comes up. You are going to bet “42”, because the chance that I pick any other particular number is arbitrarily small...you can even try giving larger numbers a complexity penalty; it won’t change the problem. Any evidence for a number that brings it above “arbitrarily small” will do.
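A toy version of the bet, using an assumed huge finite range as a stand-in for “all natural numbers” (both the range and the split are illustrative):

```python
# Illustrative setup: 1% of the probability mass on 42, the remaining
# 99% spread over a huge range standing in for the other naturals.
K = 10**9  # assumed finite stand-in for "infinitely many alternatives"

p_42 = 0.01
p_each_other = 0.99 / K  # probability of any one specific alternative

# 42 is the single most probable outcome by a factor of roughly ten
# million, so betting on it maximizes the chance of winning.
best_bet = 42 if p_42 > p_each_other else None
```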
the chance of all my neurons spontaneously rearranging themselves to fake the same evidence would always be higher than one over googolplex.
The analogy still holds. Just pretend that there is a 99% chance you misheard me when I said “42”, and that I might have said any other number. You still end up betting on 42.
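The misheard variant comes out the same way; with the same assumed finite stand-in for the naturals, the odds still favor “42” over any single rival by a huge margin:

```python
# Illustrative: a 99% chance you misheard, spread over K alternatives;
# only 1% remains on having heard "42" correctly.
K = 10**9  # assumed finite stand-in for the other natural numbers

p_heard_42 = 0.01              # chance you heard correctly
p_each_alternative = 0.99 / K  # chance of any one specific misheard number

# "42" out-competes every individual rival by millions to one.
odds_in_favor_of_42 = p_heard_42 / p_each_alternative
```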