There are other ways of taking Pascal’s Mugging into account. You shouldn’t handle it by appealing to a lack of computing power. And if you aren’t appealing to a lack of computing power, why involve randomness at all? Why not work out what an agent would probably do after N samples, or something like that?
Well, it’s partly because sampling-based approximate inference algorithms are massively faster than exact marginalization over large numbers of nuisance variables. It’s also because sampling-based inference makes all the expectations behave correctly in the limit, while still yielding boundedly approximately correct reasoning even when computing power is very limited.
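To make the contrast concrete, here’s a minimal sketch in plain NumPy (the toy world model, probabilities, and utilities are all made up for illustration): exact marginalization sums over every possible world, while the sampling-based estimate only averages utilities over N draws from the prior, so its cost is set by N rather than by the size of the hypothesis space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy world model: world i has prior probability probs[i],
# and paying the mugger yields utility utils[i] in that world.
probs = np.array([0.70, 0.25, 0.05, 1e-12])
probs = probs / probs.sum()               # renormalize so the weights sum to 1
utils = np.array([-1.0, 0.0, 2.0, 1e15])  # last entry is the Mugging world

# Exact marginalization: sum over every possible world.
exact_eu = float(np.dot(probs, utils))

# Sampling-based estimate: average the utility over N draws from the prior.
# Cost scales with N, not with the size of the hypothesis space.
N = 10_000
samples = rng.choice(len(probs), size=N, p=probs)
sampled_eu = float(utils[samples].mean())

print(exact_eu, sampled_eu)  # the exact sum is dominated by the Mugging term;
                             # the sampled estimate almost never even sees it
```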
So we beat the Mugging while still being able to have an unbounded utility function, because even in the limit, Mugging-level absurd possible worlds can only dominate our decision-making an overwhelmingly tiny fraction of the time: only when the number of samples exceeds the reciprocal of their probability, which essentially never happens in practice.
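Concretely, the chance that a world of probability p ever shows up among N independent samples is 1 − (1 − p)^N, which is roughly N·p when N·p ≪ 1; for Mugging-level probabilities and any remotely realistic sample budget, that product is negligible. The numbers below are purely illustrative.

```python
p = 1e-20    # illustrative prior probability of the Mugging-level world
N = 10**9    # illustrative sample budget per decision

# P(the Mugging world appears in at least one of N draws) = 1 - (1 - p)**N,
# which is approximately N * p when N * p << 1.
print(N * p)  # ~1e-11: the fraction of decisions it could possibly dominate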
Importance sampling wouldn’t have you ignore Pascal’s Muggings, though. At its most basic, ‘sampling’ is just a way of probabilistically computing an integral.
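For reference, a minimal importance-sampling sketch (plain NumPy/SciPy, with an illustrative target and proposal): the integral of a density over a rare region is estimated by drawing from a proposal concentrated there and reweighting by p/q, so the rare event keeps its full probability mass even though naive sampling essentially never hits it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N = 100_000

# Target: P(Z > 4) for Z ~ N(0, 1), i.e. the integral of the normal pdf over (4, inf).
# Naive Monte Carlo: an indicator of the rare event; almost every sample contributes 0.
z = rng.standard_normal(N)
naive = np.mean(z > 4.0)

# Importance sampling: draw from a proposal concentrated where the integrand lives
# (an exponential shifted to start at 4), then reweight each draw by p(x) / q(x).
x = 4.0 + rng.exponential(scale=1.0, size=N)
weights = stats.norm.pdf(x) / stats.expon.pdf(x, loc=4.0)
is_estimate = np.mean(weights)

print(naive, is_estimate, stats.norm.sf(4.0))  # IS lands near the true tail mass
```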
Well, they shouldn’t be ignored, as long as they have some finite probability. The idea is that by sampling (importance or otherwise), we almost never actually give in: we spend our finite computing power on strictly more probable scenarios, even though the Mugging (by definition) would dominate our expected-utility calculation in the case of a completed infinity.
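A minimal simulation of that claim (all probabilities, utilities, and budgets are made up): the Mugging world keeps its nonzero probability in the model, but with a realistic sample budget it almost never appears among any given decision’s samples, so the sample-based expected utility almost never tells the agent to pay.

```python
import numpy as np

rng = np.random.default_rng(1)

p_mug = 1e-9          # illustrative prior probability of the Mugging world
u_mug = 1e15          # its astronomical payoff for paying up
u_pay_normal = -1.0   # utility of paying in every ordinary world
N = 1_000             # samples the agent can afford per decision
trials = 10_000       # how many separate decisions we simulate

# A sample-based agent pays the mugger only if the Mugging world shows up
# among its N samples and drags the estimated expected utility above zero.
mug_counts = rng.binomial(N, p_mug, size=trials)
est_eu = (mug_counts * u_mug + (N - mug_counts) * u_pay_normal) / N

# Fraction of simulated decisions where the agent pays: theoretically
# ~N * p_mug = 1e-6, so with only 10,000 trials this almost certainly prints 0.0.
print(float(np.mean(est_eu > 0.0)))
```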