Thought: I think Pascal’s Mugging can’t harm boundedly rational agents. If an agent is bounded in its computing power, then what it ought to do is draw some bounded number of samples from its mixture model of possible worlds, and then evaluate the expected value of its actions in the sample rather than across the entire mixture. As the available computing power approaches infinity, the sample size approaches infinity, and the sample more closely resembles the true distribution, thus causing the expected utility calculation to approach the true expected utility across the infinite ensemble of possible worlds. But, as long as we employ a finite sample, the more-probable worlds are so overwhelmingly more likely to be sampled that the boundedly rational agent will never waste its finite computing power on Pascal’s Muggings: it will spend more computing power examining the possibility that it has spontaneously come into existence as a consequence of an Infinite Improbability Drive being ignited in its near vicinity than on true Muggings.
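Concretely, here’s a toy sketch of what I mean; the hypotheses, probabilities, and utilities are all invented for illustration:

```python
import random

# Toy model: a boundedly rational agent draws a finite sample of possible
# worlds from its prior, then evaluates "pay the mugger" over that sample
# instead of over the full mixture. All numbers here are made up.

# Each hypothesis: (prior probability, utility of paying if it's true)
MUGGER_LYING    = (1.0 - 1e-18, -5.0)   # you just lose five dollars
MUGGER_TRUTHFUL = (1e-18, 1e40)         # the astronomically good outcome

def sampled_expected_utility(n_samples, rng):
    """Monte Carlo estimate of E[utility] from n_samples draws of the prior."""
    hypotheses = [MUGGER_LYING, MUGGER_TRUTHFUL]
    weights = [p for p, _ in hypotheses]
    draws = rng.choices(hypotheses, weights=weights, k=n_samples)
    return sum(u for _, u in draws) / n_samples

rng = random.Random(0)
# The 1e-18 hypothesis essentially never lands in any feasible sample, so the
# estimate is -5.0 and the agent declines -- even though the full-mixture
# expected utility, 1e-18 * 1e40 - 5, is enormous.
print(sampled_expected_utility(1_000_000, rng))   # -5.0
```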
There are other ways of taking Pascal’s Mugging into account. You shouldn’t dismiss it based on lack of computing power. And if you aren’t dismissing it based on lack of computing power, why involve randomness at all? Why not work out what an agent would probably do after N samples, or something like that?
Well, it’s partially because sampling-based approximate inference algorithms are massively faster than exact marginalization over large numbers of nuisance variables. It’s also because using sampling-based inference makes all the expectations behave correctly in the limit while still yielding boundedly approximately correct reasoning even when computing power is very limited.
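A rough illustration of that speed difference, using an invented model (the conditional below is just a stand-in):

```python
import itertools
import random

# Illustration with a made-up model: estimating a marginal probability over
# k binary nuisance variables. Exact marginalization sums over all 2**k joint
# assignments; Monte Carlo touches only n_samples of them, whatever k is.

def p_query_given(assignment):
    # Stand-in conditional probability; any function of the assignment
    # would do for the illustration.
    return 0.7 if sum(assignment) % 2 else 0.3

def exact_marginal(k):
    # 2**k terms under a uniform prior on the nuisance variables.
    return sum(p_query_given(a) for a in itertools.product((0, 1), repeat=k)) / 2 ** k

def monte_carlo_marginal(k, n_samples, rng):
    # Cost is n_samples draws, independent of k.
    draws = (tuple(rng.randint(0, 1) for _ in range(k)) for _ in range(n_samples))
    return sum(p_query_given(a) for a in draws) / n_samples

rng = random.Random(0)
print(exact_marginal(16))                    # 0.5, but 65,536 terms
print(monte_carlo_marginal(16, 1_000, rng))  # ~0.5, only 1,000 terms
```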
So we beat the Mugging while also being able to have an unbounded utility function, because even in the limit, Mugging-level absurd possible-worlds can only dominate our decision-making an overwhelmingly tiny fraction of the time (when the sample size is more than the multiplicative inverse of their probability, which basically never happens in reality).
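A quick sanity check on that claim, with illustrative numbers:

```python
import math

# Chance that a hypothesis of probability p appears at least once in n
# independent draws: 1 - (1 - p)**n, roughly n * p when n * p << 1.
p = 1e-20       # prior probability of the mugger's scenario (illustrative)
n = 10 ** 9     # a very generous sample budget
prob_ever_sampled = -math.expm1(n * math.log1p(-p))
print(prob_ever_sampled)   # ~1e-11: the Mugging essentially never enters the sample
```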
Importance sampling wouldn’t have you ignore Pascal’s Muggings, though. At its most basic, ‘sampling’ is just a way of probabilistically computing an integral.
Well, they shouldn’t be ignored, as long as they have some finite probability. The idea is that by sampling (importance or otherwise), we almost never give in to it; we almost always spend our finite computing power on strictly more probable scenarios instead, even though the Mugging (by definition) would dominate our expected-utility calculation in the case of a completed infinity.
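To make the contrast concrete, here’s a toy sketch with invented numbers and an arbitrarily chosen proposal distribution: a plain prior sample basically never contains the Mugging world, while an importance sampler that deliberately over-weights that world does recover the huge term through its importance weights.

```python
import random

# Toy contrast, all numbers invented. Prior sampling essentially never draws
# the Mugging world; importance sampling with a proposal that over-weights it
# recovers the huge expected-utility term via the importance weights p/q.

P_MUGGING  = 1e-20   # prior probability of the mugger's scenario
U_MUGGING  = 1e40    # utility of paying if the mugger is truthful
U_ORDINARY = -5.0    # utility of paying if the mugger is lying

def prior_sampling_estimate(n, rng):
    total = 0.0
    for _ in range(n):
        total += U_MUGGING if rng.random() < P_MUGGING else U_ORDINARY
    return total / n

def importance_sampling_estimate(n, rng, q_mugging=0.5):
    # Proposal q draws the Mugging world half the time; weights p/q correct for it.
    total = 0.0
    for _ in range(n):
        if rng.random() < q_mugging:
            total += (P_MUGGING / q_mugging) * U_MUGGING
        else:
            total += ((1.0 - P_MUGGING) / (1.0 - q_mugging)) * U_ORDINARY
    return total / n

rng = random.Random(0)
print(prior_sampling_estimate(100_000, rng))       # -5.0: Mugging never sampled
print(importance_sampling_estimate(100_000, rng))  # ~1e20: the Mugging term is back
```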