Yes, this is what I was trying to say. I see how the phrase “conditionality of the reward on your assessed probability” could describe Pascal’s Wager, but not how it could describe Pascal’s Mugging.
More concisely than the original/gwern: The algorithm used by the mugger is roughly:
1. Find your assessed probability of the mugger being able to deliver whatever reward, being careful to specify the size of the reward in the conditions for the probability.
2. Offer an exchange such that U(payment to mugger) < U(reward) * P(reward).
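Roughly, in code (the utility and probability numbers below are made up purely to show the inequality; nothing here is from the original discussion):

```python
def mugger_offer_is_accepted(payment_utility: float,
                             reward_utility: float,
                             reward_probability: float) -> bool:
    """Naive expected-utility check the mugger exploits:
    pay iff the cost of paying is less than the expected reward."""
    return payment_utility < reward_utility * reward_probability

# Made-up numbers: paying costs 5 utils, the promised reward is worth
# 10**100 utils, and the victim assigns it 1-in-10**30 odds.
print(mugger_offer_is_accepted(5.0, 1e100, 1e-30))  # True -> the victim pays
```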
This is an issue for AI design because if you use a prior based on Kolmogorov complexity, then it's relatively straightforward to find such a reward: even very large numbers have relatively low complexity, and therefore relatively high prior probabilities.
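A toy illustration of why such a prior is exploitable, using the character length of the shortest expression you can think of for the number as a crude stand-in for Kolmogorov complexity (the 8-bits-per-character constant and the example numbers are arbitrary):

```python
def description_length_bits(expr: str) -> int:
    # Crude proxy: 8 bits per character of the shortest expression we can
    # think of for the number ("10**100", "3^^^3", ...). Real Kolmogorov
    # complexity is uncomputable; this just shows the scaling.
    return 8 * len(expr)

def complexity_prior(expr: str) -> float:
    # Prior ~ 2^(-description length), Kolmogorov-style.
    return 2.0 ** -description_length_bits(expr)

payment_utility = 5.0                # utility cost of handing over the money
reward_value = 10.0 ** 100           # numeric size of the promised reward
prior = complexity_prior("10**100")  # short description => not-so-tiny prior

# The prior shrinks with description length (~2^-56 here) while the reward
# grows like 10^100, so the expected value of the promise dwarfs the payment.
print(prior * reward_value > payment_utility)  # True
```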
When you have a bunch of other data, you shouldn't be interested in the Kolmogorov complexity of the number on its own; you're interested in the Kolmogorov complexity of the other data concatenated with that number.
E.g. you should not assign higher probability to Bill Gates having made precisely $100,000,000,000 than to some random-looking value: given the other sensory input you got (from which you derived your world model), there are random-looking values that give even lower Kolmogorov complexity for the total sensory input, but you wouldn't be able to find them, because Kolmogorov complexity is uncomputable. You end up mis-estimating Kolmogorov complexity when it isn't handed to you pre-made on a platter.
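One rough way to see the "complexity of the concatenation" point, using zlib's compressed length as a very crude stand-in for Kolmogorov complexity; the "world model" string and the two net-worth figures are invented for the example:

```python
import zlib

def proxy_complexity(data: bytes) -> int:
    # Compressed length is only an upper-bound proxy for K(data);
    # the true minimum is uncomputable.
    return len(zlib.compress(data, 9))

# Invented stand-in for all your other sensory data / world model.
world_model = b"news reports, balance sheets, stock prices ... " * 50

round_claim = world_model + b"Gates net worth: 100000000000"
other_claim = world_model + b"Gates net worth: 137408229516"

# What matters is the complexity of (world model + claim), not of the
# number alone. The round number is shorter by itself, but concatenated
# with everything else the difference nearly vanishes under this proxy.
print(proxy_complexity(round_claim), proxy_complexity(other_claim))
```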
Actually, what you should use is algorithmic (Solomonoff) probability, as AIXI does, over the history of sensory input, taking a weighted sum over the world models that present you with the mugger's marketing spiel. The shortest ones simply have the mugger making it up; then there are the models where the mugger will torture beings if you pay and not torture if you don't. It's unclear what comes out of this and how it pans out, because, again, it's uncomputable.
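A toy version of that weighted sum, with a hand-written hypothesis list and made-up description lengths standing in for the (uncomputable) enumeration over programs:

```python
# Hand-picked hypotheses consistent with "you heard the mugger's spiel",
# each with an invented description length in bits.
hypotheses = [
    # (description, length_bits, utility_of_paying_under_this_model)
    ("mugger is simply making it up",         20, -5.0),
    ("mugger tortures beings iff you pay",    60, -1e10),
    ("mugger tortures beings iff you refuse", 60, +1e10),
]

def weight(length_bits: int) -> float:
    return 2.0 ** -length_bits  # algorithmic-probability-style weight

total = sum(weight(bits) for _, bits, _ in hypotheses)
expected_utility_of_paying = sum(
    (weight(bits) / total) * utility for _, bits, utility in hypotheses
)
# The two exotic torture models carry equal weight and cancel; the short
# "making it up" model dominates, so paying just costs you the money.
print(expected_utility_of_paying)  # ~ -5
```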
In the human approximation, you take what the mugger says as a privileged model, which is strictly speaking an invalid update (the probability jumps from effectively zero, for never having thought about it, to nonzero), and invalid updates come with a cost: being prone to losing money. Constructing the model directly from what the mugger says the model should be is a hack; at that point anything goes, and you can have another hack, of the strategic kind, that refuses to apply this string->model hack to ultra-extraordinary claims made without evidence.
The mugging is defined as having conditionality; just read Bostrom’s paper or Baumann’s reply! That Eliezer did not explicitly state the mugger’s simple algorithm, but instead implied it in his discussion of complexity and size of numbers, does not obviate this point.