If we go back to the Pascal’s wager post, though—Eliezer Yudkowsky just seems to be saying that he doesn’t know how to build a resource-limited version of Solomonoff induction that doesn’t make the mistake he mentions.
It has nothing to do with resource limitations; the problem is that Solomonoff induction itself can’t handle Pascal’s mugging. If anything, the resource-limited version of Solomonoff induction is less likely to fall for Pascal’s mugging, since it might round the small probability down to 0.
It has nothing to do with resource limitations; the problem is that Solomonoff induction itself can’t handle Pascal’s mugging.
In what way? You think that Solomonoff induction would predict enormous torture with a non-negligible probability if it observed the mugger not being paid? Why do you think that? That conclusion seems extremely unlikely to me—assuming that the Solomonoff inductor had had a reasonable amount of previous exposure to the world. It would, like any sensible agent, assume that the mugger was lying.
That’s why the original Pascal’s mugging post directed its criticism at “some bounded analogue of Solomonoff induction”.
In what way? You think that Solomonoff induction would predict enormous torture with a non-negligible probability if it observed the mugger not being paid?
Because Solomonoff induction bases its priors on minimum message length, and it’s possible to encode enormous numbers like 3^^^3 in a message of length much less than 3^^^3.
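To make that concrete, here is a minimal sketch (the function and its name are illustrative, not anything from the original posts) of how few symbols it takes to pin down 3^^^3:

```python
# Knuth's up-arrow recursion: a ^^...^ b with n arrows.
# 3^^^3 is up_arrow(3, 3, 3).
def up_arrow(a, n, b):
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# Don't actually call up_arrow(3, 3, 3): evaluating it is hopeless.
# The point is only that these few lines *specify* 3^^^3 exactly, so a
# hypothesis mentioning 3^^^3 pays a complexity penalty on the order of
# this program's length, which is minuscule compared with 3^^^3 itself.
```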
Why do you think that?
Because I understand mathematics. ;)
That’s why the original Pascal’s mugging post directed its criticism at “some bounded analogue of Solomonoff induction”.
What Eliezer was referring to is the fact that an unbounded agent would attempt to incorporate all possible versions of Pascal’s wager and Pascal’s mugging simultaneously and promptly end up with an ∞ − ∞ error.
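Spelled out a little (a rough sketch, on the assumption that the agent is a standard expected-utility maximizer over a Solomonoff prior, which the thread never pins down): such an agent computes something like

    E[U] = Σ_h 2^(−K(h)) · U(h)

over all hypotheses h, where K(h) is the length of the shortest program describing h. Because short programs can name payoffs as large as ±3^^^3, the positive and the negative parts of the sum each diverge, and their difference is exactly the undefined ∞ − ∞.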
You think that Solomonoff induction would predict enormous torture with a non-negligible probability if it observed the mugger not being paid?
Because Solomonoff induction bases its priors on minimum message length, and it’s possible to encode enormous numbers like 3^^^3 in a message of length much less than 3^^^3.
Sure—but the claim that there are large numbers of people waiting to be tortured also decreases in probability with the number of people involved.
I figure that Solomonoff induction would give a (correctly) tiny probability of this hypothesis being true.
Your problem is actually not with Solomonoff induction—despite what you say—I figure. Rather, you are complaining about some decision-theory application of Solomonoff induction—one involving the concept of “utility”.
Because Solomonoff induction bases its priors on minimum message length, and it’s possible to encode enormous numbers like 3^^^3 in a message of length much less than 3^^^3.
Sure—but the claim that there are large numbers of people waiting to be tortured also decreases in probability with the number of people involved.
What does this have to do with my point?
I figure that Solomonoff induction would give a (correctly) tiny probability of this hypothesis being true.
It does, just not tiny enough to override the 3^^^3 utility difference.
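A back-of-envelope check (every number here is an assumption for illustration, and 3^^4 stands in for 3^^^3, whose logarithm can’t even be represented):

```python
import math

penalty_bits = 10**6                 # assume the mugger's story costs a million bits
log2_prior = -penalty_bits           # Solomonoff-style prior ~ 2^-K

three_up2_3 = 3 ** 3 ** 3            # 3^^3 = 7,625,597,484,987
log2_utility = three_up2_3 * math.log2(3)   # log2(3^^4) = 3^^3 * log2(3), about 1.2e13

print(log2_prior + log2_utility > 0)  # True: the utility term still dominates
```

Even a million-bit complexity penalty is noise next to the tower exponential, and 3^^^3 only widens the gap.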
Your problem is actually not with Solomonoff induction—despite what you say—I figure. Rather, you are complaining about some decision-theory application of Solomonoff induction—one involving the concept of “utility”.
I don’t have a problem with anything; I’m just trying to correct misconceptions about Pascal’s mugging.
I’m just trying to correct misconceptions about Pascal’s mugging.
Well, your claim was that “Solomonoff induction itself can’t handle Pascal’s mugging”—which appears to be unsubstantiated nonsense. Solomonoff induction will give the correct answer based on Occamian priors and its past experience—which is the best that anyone could reasonably expect from it.