I don’t see what’s wrong with the idea that “extraordinary claims require extraordinary evidence”.
Me neither, but quite a few people on LessWrong don’t seem to share that opinion, or else are in possession of vast amounts of evidence that I lack. For example, some people seem to take seriously scenarios like “interference from an alternative Everett branch in which a singularity went badly” or “an unfriendly AI that might achieve complete control over our branch by means of acausal trade”. Fascinating topics for sure, but in my opinion far too detached from reality to be taken at all seriously.
I think you only get significant interference from “adjacent” worlds—but sure, this sounds a little strange, the way you put it.
If we go back to the Pascal’s wager post though—Eliezer Yudkowsky just seems to be saying that he doesn’t know how to build a resource-limited version of Solomonoff induction that doesn’t make the mistake he mentions. That’s fair enough—nobody knows how to build high-quality approximations of Solomonoff induction—or we would be done by now. The point is that this isn’t a problem with Solomonoff induction, or with the idea of approximating it. It’s just a limitation in Eliezer Yudkowsky’s current knowledge (and probably everyone else’s). I fully expect that we will solve the problem, though. Quite possibly, to do so, we will have to approximate Solomonoff induction in the context of some kind of reward system or utility function—so that we know which mispredictions are costly (e.g. by resulting in getting mugged)—which will guide us to the best points at which to apply our limited resources.
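A toy sketch of the kind of thing I have in mind (every name, bit count and stake below is invented for illustration; real Solomonoff induction is incomputable, and this is nothing like an actual approximation of it):

```python
# Toy illustration only: a bounded "Occamian" predictor over a hand-picked
# hypothesis list, which spends its limited refinement budget where a
# misprediction would be most costly in utility terms.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    description_bits: int     # stands in for the length of the shortest program
    utility_at_stake: float   # how costly getting this one wrong would be

hypotheses = [
    Hypothesis("mugger is just lying", description_bits=20, utility_at_stake=5.0),
    Hypothesis("mugger really controls a vast simulation", description_bits=120, utility_at_stake=1e15),
]

def occamian_weight(h: Hypothesis) -> float:
    return 2.0 ** -h.description_bits   # shorter description => higher prior weight

total = sum(occamian_weight(h) for h in hypotheses)
for h in hypotheses:
    prior = occamian_weight(h) / total
    # Allocate extra compute in proportion to prior probability times the stakes,
    # so hypotheses that are costly-if-wrong get examined more carefully.
    budget_share = prior * h.utility_at_stake
    print(f"{h.name}: prior={prior:.3g}, refinement-budget weight={budget_share:.3g}")
```

The point is only that a utility signal tells the bounded predictor which hypotheses are worth spending its scarce cycles on.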
If we go back to the Pascal’s wager post though—Eliezer Yudkowsky just seems to be saying that he doesn’t know how to build a resource-limited version of Solomonoff induction that doesn’t make the mistake he mentions.
It has nothing to do with resource limitations; the problem is that Solomonoff induction itself can’t handle Pascal’s mugging. If anything, a resource-limited version of Solomonoff induction is less likely to fall for Pascal’s mugging, since it might round the small probability down to 0.
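As a loose analogy only (finite-precision floats standing in for a bounded reasoner, not any real approximation of Solomonoff induction):

```python
# Loose analogy: double-precision floats as a stand-in for a reasoner
# that cannot represent arbitrarily small probabilities.
p = 2.0 ** -1100      # below the smallest representable double, so it underflows
print(p)              # 0.0
print(p * 1e300)      # still 0.0: once the probability rounds to zero,
                      # no payoff, however large, can pull the expected value back up
```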
It has nothing to do with resource limitations; the problem is that Solomonoff induction itself can’t handle Pascal’s mugging.
In what way? You think that Solomonoff induction would predict enormous torture with a non-negligible probability if it observed the mugger not being paid? Why do you think that? That conclusion seems extremely unlikely to me—assuming that the Solomonoff inductor had had a reasonable amount of previous exposure to the world. It would, like any sensible agent, assume that the mugger was lying.
That’s why the original Pascal’s mugging post directed its criticism at “some bounded analogue of Solomonoff induction”.
In what way? You think that Solomonoff induction would predict enormous torture with a non-negligible probability if it observed the mugger not being paid?
Because Solomonoff induction bases its priors on minimum message length, and it’s possible to encode enormous numbers like 3^^^3 in a message of length much less than 3^^^3.
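For instance, a few lines of code suffice to pin down 3^^^3 exactly (a hypothetical sketch of Knuth’s up-arrow notation, just to make the description-length point concrete):

```python
# A handful of lines suffice to *define* 3^^^3 (Knuth's up-arrow notation),
# which is the point: the description is tiny compared to the number it names.
def knuth(a: int, n: int, b: int) -> int:
    """Compute a uparrow^n b: n=1 is ordinary exponentiation."""
    if n == 1:
        return a ** b
    result = 1
    for _ in range(b):
        result = knuth(a, n - 1, result)
    return result

print(knuth(2, 2, 3))   # 2^^3 = 2**(2**2) = 16, a small sanity check
# 3^^^3 would be knuth(3, 3, 3): far too large ever to evaluate,
# yet its description fits in a dozen lines.
```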
Why do you think that?
Because I understand mathematics. ;)
That’s why the original Pascal’s mugging post directed its criticism at “some bounded analogue of Solomonoff induction”.
What Eliezer was referring to is the fact that an unbounded agent would attempt to incorporate all possible versions of Pascal’s wager and Pascal’s mugging simultaneously and promptly end up with an ∞ − ∞ error.
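Roughly, as I understand it (my own gloss, not Eliezer’s exact formulation): with an unbounded hypothesis space and unbounded utilities, the positive and negative parts of the expected-utility sum each diverge,

$$
\sum_{h} P(h)\,U(h)
\;=\;
\sum_{h:\,U(h)>0} P(h)\,U(h)
\;+\;
\sum_{h:\,U(h)<0} P(h)\,U(h),
$$

and both partial sums blow up, the first to $+\infty$ and the second to $-\infty$, because a prior of roughly $2^{-K(h)}$ falls off far more slowly than payoffs like $3\uparrow^{k}3$ grow. The total is therefore an undefined $\infty-\infty$.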
You think that Solomonoff induction would predict enormous torture with a non-negligible probability if it observed the mugger not being paid?
Because Solomonoff induction bases its priors on minimum message length, and it’s possible to encode enormous numbers like 3^^^3 in a message of length much less than 3^^^3.
Sure—but the claim that there are large numbers of people waiting to be tortured also decreases in probability with the number of people involved.
I figure that Solomonoff induction would assign a (correctly) tiny probability to this hypothesis being true.
Your problem is actually not with Solomonoff induction—despite what you say—I figure. Rather, you are complaining about some decision-theory application of Solomonoff induction—involving the concept of “utility”.
Because Solomonoff induction bases its priors on minimum message length, and it’s possible to encode enormous numbers like 3^^^3 in a message of length much less than 3^^^3.
Sure—but the claim that there are large numbers of people waiting to be tortured also decreases in probability with the number of people involved.
What does this have to do with my point?
I figure that Solomonoff induction would assign a (correctly) tiny probability to this hypothesis being true.
It does, just not tiny enough to override the 3^^^3 utility difference.
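To spell out the arithmetic (the 300-bit figure is invented purely for illustration): suppose the mugger’s story costs a few hundred extra bits of description, so its prior is penalized by a factor of about $2^{-300}$. The expected value of paying up is then roughly

$$
\underbrace{2^{-300}}_{\text{prior penalty}} \times \underbrace{3\uparrow\uparrow\uparrow 3}_{\text{stake}} \;-\; \underbrace{5}_{\text{cost of paying}},
$$

which is astronomically positive, since $2^{300} < 10^{91}$ while $3\uparrow\uparrow\uparrow 3$ dwarfs any figure of that kind. The prior is tiny, but not tiny on the scale of the stake.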
Your problem is actually not with Solomonoff induction—despite what you say—I figure. Rather, you are complaining about some decision-theory application of Solomonoff induction—involving the concept of “utility”.
I don’t have a problem with anything, I’m just trying to correct misconceptions about Pascal’s mugging.
I’m just trying to correct misconceptions about Pascal’s mugging.
Well, your claim was that “Solomonoff induction itself can’t handle Pascal’s mugging”—which appears to be unsubstantiated nonsense. Solomonoff induction will give the correct answer based on Occamian priors and its past experience—which is the best that anyone could reasonably expect from it.