Hi Holden,

I just read this thread today. I made a clarification upthread about the description of my comment above, under Louie’s. Also, I’d like to register that I thought your characterization of that interview was fine, even without the clarifications you make here.
“They both argue that standard Bayesian inference indicates against the literal use of non-robust expected value estimates, particularly in ‘Pascal’s Mugging’ type scenarios.”
As a technical point, I don’t think these posts address “Pascal’s Mugging” scenarios in any meaningful way.
Bayesian adjustment is a standard part of Pascal’s Mugging. The problem is that Solomonoff complexity priors have fat tails, because describing fundamental laws of physics that allow large payoffs is not radically more complex than describing laws that only allow small payoffs. It doesn’t take an extra 10^1000 bits to describe a world where an action generates 2^(10^1000) times as much of some good, e.g. happy puppies. So we can’t rule out black swans a priori in that framework (without something like an anthropic assumption that amounts to the Doomsday Argument).
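To make the fat-tail arithmetic concrete, here is a schematic sketch of the standard argument (my own notation, not anything from the thread): write the prior weight of a hypothesis of description length ℓ as roughly 2^(−ℓ), and let h_n be a world in which the action yields 2^n units of good.

```latex
% Schematic sketch (assumed notation): K(n) is the length of the shortest
% description of the number n, and c is the fixed overhead of a world-program
% that pays out 2^n units of good once n has been specified.  Then
\[
  \Pr(h_n)\cdot \mathrm{payoff}(h_n)
  \;\gtrsim\; 2^{-(K(n)+c)}\cdot 2^{\,n}
  \;=\; 2^{\,n - K(n) - c}.
\]
% For simply describable huge n (e.g. n = 10^{1000}, whose shortest
% description is tiny compared with n itself), this term is astronomically
% large, and the sum \sum_n 2^{-(K(n)+c)}\,2^{\,n} is unbounded.
```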
The only thing in your posts that could help with Pascal’s Mugging is the assumption of infinite certainty in a distribution without relevantly fat tails or black swans, like a normal or log-normal distribution. But that would be an extreme move, taking coherent worlds of equal simplicity and massively penalizing the ones with high payoffs, so that no evidence that could fit in a human brain could convince us we were in the high-payoff worlds. Without some justification, that seems to amount to assuming the problem away, not addressing it.
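To illustrate how strong that “infinite certainty” assumption is, here is a toy calculation; all of the numbers (the width of the log-normal, the brain-capacity figure) are my own illustrative choices, not anything either of us has endorsed.

```python
# Toy calculation: bits of likelihood-ratio evidence needed to overcome a
# log-normal prior's penalty on a mugger-sized payoff, vs. a (generous)
# guess at how many bits of evidence a human brain could even hold.

# Assume log2(payoff) ~ Normal(mu, sigma), with an illustratively huge
# spread of sigma = 1000 doublings around a modest central estimate.
mu = 10
sigma = 1000

# The mugger claims a payoff of 2**(10**1000), i.e. log2(payoff) = 10**1000.
claimed_log2_payoff = 10**1000          # exact Python int; far too big for floats

# A normal prior penalizes a point z standard deviations from the mode by
# about z**2 / 2 nats of log-probability (~z**2 / 1.4 bits; the constant
# factor is irrelevant at this scale).
z_squared = (claimed_log2_payoff - mu) ** 2 // sigma**2
bits_of_evidence_needed = z_squared // 2

brain_capacity_bits = 10**16            # rough order-of-magnitude guess

print("bits needed to believe the mugger: ~10^%d" % (len(str(bits_of_evidence_needed)) - 1))
print("bits a brain might hold:           ~10^%d" % (len(str(brain_capacity_bits)) - 1))
```

The scale gap is the point: with a thin-tailed prior held with certainty, no evidence a brain could store would ever favor the high-payoff world, which is why that move looks like assuming the mugging away rather than addressing it.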
Disclaimer 1: This is about expected value measured in the currency of “goods” like happy puppies, rather than expected utility, since agents can have bounded utility, e.g. simply not caring much more about saving a billion billion puppies than about saving a billion. This seems fairly true of most people, at least emotionally.
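A one-line illustration of the bounded-utility point (the saturating function here is my own toy choice, not a claim about anyone’s actual values):

```python
# Toy bounded utility over "goods": it barely distinguishes a billion saved
# puppies from a billion billion, even though the counts differ by 10^9.

def bounded_utility(puppies_saved, half_saturation=1e9):
    # Saturating curve that approaches 1 as the count grows without bound.
    return puppies_saved / (puppies_saved + half_saturation)

print(bounded_utility(1e9))    # 0.5
print(bounded_utility(1e18))   # ~0.999999999 -- only ~2x the utility
```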
Disclaimer 2: Occam’s razor priors assign high expected value to Pascal’s Mugging cases, but they also give higher expectations to all other actions. For instance, the chance that space colonization will let huge populations be created increases the expected value of reducing existential risk by many orders of magnitude for total utilitarians. But it also greatly increases the expected payoff of anything else that reduces existential risk by even a little. So if vaccinating African kids is expected to improve the odds of human survival going forward (not obvious, but plausible), then its expected value will be driven to within sight of focused existential risk reduction: vaccination might be a billionth as cost-effective as focused risk-reduction efforts, but probably not smaller by a factor of 10^20. By the same token, different focused existential risk interventions will compete against one another, so one will not want to support the relatively ineffective ones.
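A toy version of that comparison (all numbers are purely illustrative placeholders of mine, not estimates from either of us):

```python
# Toy comparison: once an astronomical payoff for survival enters the
# calculation, it multiplies the expected value of *every* action that
# affects survival, so the ratio between interventions stays bounded.

astronomical_payoff = 10**50          # assumed "goods" from a colonized future

# Assumed reductions in extinction probability per unit of spending:
focused_xrisk_reduction = 1e-6        # a focused existential-risk effort
vaccination_xrisk_reduction = 1e-15   # incidental effect of vaccinating kids

ev_focused = astronomical_payoff * focused_xrisk_reduction
ev_vaccination = astronomical_payoff * vaccination_xrisk_reduction

print(f"focused effort EV: {ev_focused:.1e}")
print(f"vaccination EV:    {ev_vaccination:.1e}")
print(f"ratio:             {ev_focused / ev_vaccination:.1e}")   # ~1e9, not 1e20
```

With these placeholders the ratio is about 10^9: far from parity, but nothing like the 10^20 gulf that would let one ignore indirect effects on survival entirely.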
Carl, it looks like we have a pretty substantial disagreement about key properties of the appropriate prior distribution over the expected value of one’s actions.
I am not sure whether you are literally endorsing a particular distribution (I am not sure whether “Solomonoff complexity prior” is sufficiently well-defined or, if so, whether you are endorsing that or a varied/adjusted version). I myself have not endorsed a particular distribution. So it seems like the right way to resolve our disagreement is for at least one of us to be more specific about what properties are core to our argument and why we believe any reasonable prior ought to have these properties. I’m not sure when I will be able to do this on my end and will likely contact you by email when I do.
What I do not agree with is the implication that my analysis is irrelevant to Pascal’s Mugging. It may be irrelevant for people who endorse the sorts of priors you endorse. But not everyone agrees with you about what the proper prior looks like, and many people who are closer to me on what the appropriate prior looks like still seem unaware of the implications for Pascal’s Mugging. If nothing else, my analysis highlights a relationship between one’s prior distribution and Pascal’s Mugging that I believe many others weren’t aware of. Whether it is a decisive refutation of Pascal’s Mugging is unresolved (and depends on the disagreement I refer to above).