Say someone offers to create 10^100 happy lives in exchange for something, and you assign a probability of 0.000000000000000000001 (that is, 10^-21) to their being both capable of and willing to carry out the promise. Naively, this has an overwhelmingly positive expected value.
If the stated probability is what you really assign, then yes: positive expected value.
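As a quick sanity check of that arithmetic, a minimal sketch in Python, using exactly the payoff and probability from the comment above:

```python
from fractions import Fraction

payoff = 10**100               # promised happy lives
p = Fraction(1, 10**21)        # the stated probability, 10^-21

# Naive expected value: probability times payoff, computed exactly.
print(p * payoff)              # prints 10^79, i.e. a 1 followed by 79 zeros
```

Even at a probability of 10^-21, the naive expectation is 10^79 happy lives, which is the whole force of the mugging.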
I see the key flaw as this: the more exceptional the promise, the lower the probability you must assign to it.
Would you give more credibility to someone offering you US$10^2 or US$10^7?
According to common LessWrong ideas, lowering the probability based on the exceptionality of the promise would mean lowering it based on the Kolmogorov complexity of the promise.
If you do that, you won't lower the probability enough to defeat the mugging: an astronomically large payoff can have a very short description, so the complexity penalty grows far more slowly than the promised payoff.
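A toy illustration of that point. Below, the prior penalty is 2^-(bits in the payoff's written description), which is a crude stand-in for a Kolmogorov-complexity penalty: real K(n) is incomputable, and the 8-bits-per-character coding is an arbitrary illustrative assumption.

```python
from fractions import Fraction

def toy_prior(description: str) -> Fraction:
    # Crude stand-in for a complexity prior 2^-K(n): penalize by the
    # bit-length of the payoff's written description (8 bits per char).
    # Real Kolmogorov complexity is incomputable; this is illustration only.
    return Fraction(1, 2 ** (8 * len(description)))

for desc, payoff in [("10**2", 10**2),
                     ("10**100", 10**100),
                     ("10**(10**5)", 10**(10**5))]:
    ev = toy_prior(desc) * payoff           # penalized expected value
    if ev < Fraction(10) ** 300:            # small enough for a float
        print(f"{desc:>12}: EV ~ {float(ev):.3g}")
    else:                                   # report order of magnitude
        digits = len(str(ev.numerator // ev.denominator))
        print(f"{desc:>12}: EV ~ 10^{digits - 1}")
```

The payoff's written form barely grows while the payoff itself explodes, so a complexity-style penalty on its own leaves the expected value of ever-bigger promises climbing without bound.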
If you can lower the probability more than that, of course you can defeat the mugging.
And one of the key problems with lowering it further is that it becomes extremely hard to update when you get evidence that the mugging is real.
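A rough sketch of that updating problem, in log-odds form. All the numbers here are illustrative assumptions: a prior of 10^-100 (a penalty scaled to the 10^100-lives payoff rather than to its complexity), and independent pieces of evidence each worth a generous million-to-one likelihood ratio.

```python
# Bayesian updating in log10 odds: posterior = prior + evidence.
prior = -100.0     # log10 prior odds, ~10^-100 (illustrative assumption)
per_item = 6.0     # log10 likelihood ratio per item of evidence (10^6)

for n in (1, 5, 10, 16, 17):
    post = prior + n * per_item
    p = 1 / (1 + 10 ** (-post))
    print(f"{n:2d} items -> posterior odds 10^{post:.0f} (p ~ {p:.3g})")
```

Even seventeen independent million-to-one observations only just make the claim credible; push the prior much below 10^-100 and no physically gatherable amount of evidence can ever move it.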
And if you lower it based only on complexity, your decision system just breaks down, since the expectation over arbitrary integers with probabilities computed by Solomonoff induction is undefined. That's the reason why AIXI uses bounded rewards.
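A sketch of why that expectation blows up, using the standard upper bound on the complexity of an integer, K(n) ≤ log₂ n + 2 log₂ log₂ n + c (the bound and the constant c are the usual ones from the Kolmogorov-complexity literature; this is an outline, not a full proof):

```latex
\[
  \sum_{n \ge 4} 2^{-K(n)}\, n
  \;\ge\; \sum_{n \ge 4} 2^{-c}\,\frac{n}{n\,(\log_2 n)^2}
  \;=\; 2^{-c} \sum_{n \ge 4} \frac{1}{(\log_2 n)^2}
  \;=\; \infty .
\]
```

Since payoffs can be negative as well as positive, the positive and negative parts both diverge, so the expectation is undefined rather than merely infinite; bounding the rewards, as AIXI does, is what restores a well-defined expectation.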