Eliezer:
If a poorly-dressed street person offers to save 10^(10^100) lives (googolplex lives) for $5 using their Matrix Lord powers, and you claim to assign this scenario less than 10^-(10^100) probability, then apparently you should continue to believe absolutely that their offer is bogus even after they snap their fingers and cause a giant silhouette of themselves to appear in the sky.
I don’t see why this is necessarily the case. What am I missing here?
Here is a summary of what I understand so far:
A “correct” epistemology would satisfy our intuition that we should ignore the Pascal’s Mugger who doesn’t show any evidence, and pay the Matrix Lord, who snaps his fingers and shows his power.
The problem is that no matter how low a probability we assign to the mugger telling the truth, the mugger can name an arbitrarily large number of people to save, and thus make it worth it to pay him anyway (see the inequality after this summary). If we weight the mugger’s claim as infinitesimally small, however, we won’t be sufficiently convinced by the Matrix Lord’s evidence.
The matter is further complicated by the fact that the number of people the Matrix Lord claims to save suggests a universe so complex that it gets a major complexity penalty.
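To make the first problem concrete, here is the inequality behind the mugger’s move (my own notation, not the original post’s): write p for the probability that the mugger is honest and v for an assumed fixed dollar value placed on saving one life.

```latex
% For any p > 0 and v > 0, the mugger can simply name N lives with N > 5/(p*v),
% which makes the expected value of paying exceed the $5 cost:
\[
  \underbrace{p \cdot N \cdot v}_{\text{expected value of paying}} \;>\; \$5 .
\]
```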
Here is my attempt at a solution:
Consider the set of all possible universes.
Each possible universe has a probability, and these probabilities all add up to one. Since there are infinitely many possible universes, most of them must have vanishingly small probability. Bayes’ theorem adjusts the probability of each as evidence comes in.
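As a toy illustration (my own, not anything from the original discussion): order the universes by description length, give each an exponentially shrinking prior so the total is one, and let Bayes’ theorem reshuffle the mass whenever evidence E arrives.

```latex
\[
  P(U_n) = 2^{-n}, \qquad \sum_{n=1}^{\infty} 2^{-n} = 1,
  \qquad
  P(U_n \mid E) = \frac{P(E \mid U_n)\, P(U_n)}{\sum_m P(E \mid U_m)\, P(U_m)} .
\]
```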
The Matrix Lord / person-turning-into-a-cat scenario is one in which a universe that previously had an infinitesimally low probability now gets a rather large posterior probability.
What happens when a person turns into a cat?
All of the previously most likely hypotheses are suddenly eliminated, and everything changes.
Working through an example to demonstrate that this is a solution:
You have models U1, U2, U3...and so on. P(Un) is the probability that you live in Universe n.
Your current priors:
P(U1) = 60%
P(U2) = 30%
P(U3) = epsilon
P(U4) = delta
...and so on.
Mr. Matrix turns into a cat or something. Now our hypothesis space is as follows:
P(U1) = 0
P(U2) = 0
P(U3) = 5% (previously epsilon)
P(U4) = delta
In essence, the utter elimination of all the remotely likely hypotheses suddenly makes several universes which were previously epsilon/delta/arbitrarily small in probability much more convincing: once the surviving probability mass is renormalized, a hypothesis only has to beat the other survivors, and the cat-transformation is far more likely under U3 than under almost any of them.
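Here is a minimal numerical sketch of that update. Every number is invented (and chosen so that U3 lands near the 5% figure above); only the structure of the calculation is meant seriously.

```python
# Toy Bayesian update for the "Mr. Matrix turns into a cat" observation.
# U1/U2 are ordinary-world hypotheses, U3 is "honest Matrix Lord", U4 is some
# other exotic hypothesis, and U_rest lumps together everything else
# (almost all of which is also ordinary worlds).

priors = {
    "U1": 0.60,
    "U2": 0.30,
    "U3": 5e-10,                     # "epsilon"
    "U4": 1e-13,                     # "delta"
    "U_rest": 0.10 - 5e-10 - 1e-13,  # everything else
}

# Probability of seeing the cat-transformation under each hypothesis.
likelihoods = {
    "U1": 0.0,       # flatly impossible in an ordinary world
    "U2": 0.0,
    "U3": 0.5,       # quite likely if he really is a Matrix Lord
    "U4": 0.01,
    "U_rest": 5e-8,  # the lumped remainder is still almost entirely ordinary worlds
}

evidence = sum(priors[u] * likelihoods[u] for u in priors)
posteriors = {u: priors[u] * likelihoods[u] / evidence for u in priors}

for u, p in posteriors.items():
    print(f"{u}: {p:.4%}")
# U1 and U2 drop to exactly 0; U3 climbs from 5e-10 to roughly 5%, because the
# only mass left to renormalize over is the formerly-negligible tail.
```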
Basically, if the scenario with the Matrix Lord happened to us, we ought to act in approximately the same way that the idealized “rational agent” would act if it were given no information whatsoever (so all prior probabilities are assigned using complexity alone), and then a voice from the sky suddenly specified a hypothesis of arbitrarily high complexity from the space of possible universes and claimed that it was true.
Come to think of it, you might even think of your current memories as playing the role of the “voice from the sky”. There is no meta-prior saying you should trust your memories, but you have nothing else. Similarly, when Mr. Matrix turned into a cat, he eliminated all your non-extremely-unlikely hypotheses, so you have nothing to go on but his word.
Eliezer:
But to conclude something whose prior probability is on the order of one over googolplex, I need on the order of a googol bits of evidence, and you can’t present me with a sensory experience containing a googol bits. Indeed, you can’t ever present a mortal like me with evidence that has a likelihood ratio of a googolplex to one—evidence I’m a googolplex times more likely to encounter if the hypothesis is true, than if it’s false—because the chance of all my neurons spontaneously rearranging themselves to fake the same evidence would always be higher than one over googolplex. You know the old saying about how once you assign something probability one, or probability zero, you can never change your mind regardless of what evidence you see? Well, odds of a googolplex to one, or one to a googolplex, work pretty much the same way.
But to conclude something whose prior probability is on the order of one over googolplex, I need on the order of a googol bits of evidence, and you can’t present me with a sensory experience containing a googol bits.
Huh? You don’t need to conclude anything whose prior probability was “on the order of one over googolplex.”
You just need to believe it enough that its suggested action out-competes the suggested actions of the other hypotheses... and nearly all the hypotheses which had non-negligible probability prior to the miraculous event just got falsified, so there is very little competition...
Even if the probability of the Matrix Lord telling the truth is 1%, you’re still going to give him the five dollars, because there are infinitely many ways in which he could lie.
In fact, even if the universes in which the Matrix Lord is lying are all simpler than the one in which he is telling the truth, the expected payoffs of the actions proposed by the various kinds of lie-universes cancel each other out. (In one lie-universe, he actually saves only one person; in another, equally likely lie-universe, he actually kills one person; and so on.)
When a rational agent makes the decision, it calculates the expected value of the intended action over every possible universe, weighted by probability.
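Here is a minimal sketch of that calculation for the decision “pay the $5”, again with invented hypotheses, probabilities, and payoffs; the only point is that the symmetric lie-universe terms cancel while the single honest-Matrix-Lord term dominates.

```python
# Expected value (in lives) of paying the $5, summed over a toy hypothesis space.
# All hypotheses and numbers are invented for illustration.

GOOGOL = 10 ** 100   # computable stand-in; the claim is really 10**(10**100),
                     # which is far too large to write out

hypotheses = [
    # (posterior probability, lives saved if you pay)
    (0.01, GOOGOL),   # he is telling the truth
    (0.20, +1),       # lie-universe: paying happens to save one person
    (0.20, -1),       # mirror lie-universe: paying gets one person killed
    (0.59, 0),        # all the other lies: paying changes nothing
]

expected_lives = sum(p * payoff for p, payoff in hypotheses)
print(expected_lives)   # the +1/-1 lie terms cancel; the truth term dominates
```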
By analogy:
Suppose I tell you I’m going to pick a random natural number, I additionally tell you that there is a 1% chance that I pick “42”, and I ask you to bet on which number comes up. You are going to bet “42”, because the chance that I pick any other particular number is arbitrarily small... you can even try giving larger numbers a complexity penalty; it won’t change the problem. Any evidence that brings a particular number above “arbitrarily small” will do.
the chance of all my neurons spontaneously rearranging themselves to fake the same evidence would always be higher than one over googolplex.
The analogy still holds. Just pretend that there is a 99% chance that you misheard me when I said “42”, and that I might have said any other number. You still end up betting on 42, because no specific alternative number comes anywhere near 1%.
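A quick sketch of that mishearing variant. The toy assumption here is mine: if you misheard, the number actually said is taken to be equally likely to be any of a million other candidates, so no single alternative comes close to the 1% sitting on 42.

```python
# Which single number should you bet on, given a 99% chance you misheard "42"?

P_HEARD_CORRECTLY = 0.01
N_ALTERNATIVES = 1_000_000   # invented spread of possible mishearings

def p_named(n: int) -> float:
    """Toy probability that the number actually named was n."""
    if n == 42:
        return P_HEARD_CORRECTLY
    return (1 - P_HEARD_CORRECTLY) / N_ALTERNATIVES   # about 1e-6 each

candidates = [7, 42, 1337, 999_983]
best = max(candidates, key=p_named)
print(best, p_named(best))   # 42 wins: 0.01 beats ~9.9e-7 for every alternative
```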