The actual probability is either 0 or 1 (either happens or doesn’t happen). Values in between quantify ignorance and partial knowledge (e.g. when you have no reason to prefer one side of the die to the other), or, at times, are chosen very arbitrarily (what is the probability that a physics theory is “correct”?).
I don’t know if those things have such extremes of low probability vs. high utility as to be called Pascal’s mugging.
New names for the same things are kind of annoying, to be honest, especially ill-chosen ones… if it happens by your own contemplation, I’d call it Pascal’s Wager. Mugging implies someone making threats; a scam is more general and can involve promises of reward. Either way, the key is the high-payoff proposition wreaking havoc, whether through its prior probability being too high, other propositions having been omitted, or the like.
But even so, the human brain doesn’t operate on anything like Solomonoff induction, Bayesian probability theory, or expected utility maximization.
The actual probability is either 0 or 1 (either happens or doesn’t happen).
Yes, but the goal is to assign as high a probability as possible to whichever outcome will actually happen, using whatever information we have. The fact that some outcomes result in ridiculously huge utility gains does not imply anything about how likely they are to happen, so there is no reason it should be taken into account (unless it actually does imply something, in which case it should be).
New names for the same things are kind of annoying, to be honest, especially ill-chosen ones… if it happens by your own contemplation, I’d call it Pascal’s Wager. Mugging implies someone making threats; a scam is more general and can involve promises of reward. Either way, the key is the high-payoff proposition wreaking havoc, whether through its prior probability being too high, other propositions having been omitted, or the like.
Pascal’s mugging is an absurd scenario with absurd rewards approaching infinity. What you are talking about are just normal, everyday scams. Most scams do not promise such huge rewards or have such low probabilities (if you didn’t know any better, it is plausible that someone could have an awesome invention or need your help with transaction fees).
And the problem with scams is that people overestimate their probability. If they were to consider how many emails in the world are actually from Nigerian princes versus scammers, or how many people promise awesome inventions without any proof that they will actually work, they would reconsider. In Pascal’s mugging, you fall for it even after having considered the probability of it happening in detail.
Your probability estimate could be absolutely correct. Maybe 1 out of a trillion times a person meets someone claiming to be a Matrix Lord, that person is actually telling the truth. And they still end up getting scammed, so that the one-in-a-trillion counterfactual version of themselves gets an infinite reward.
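A minimal arithmetic sketch of that point, with purely illustrative stand-in numbers of my own (a 1e-12 probability, an enormous but finite reward, a cost of 5 utils): even when the tiny probability is estimated exactly right, a naive expected-utility maximizer still pays up.

```python
# Toy expected-value comparison for a naive expected-utility maximizer.
# All numbers are illustrative stand-ins, not claims about real probabilities.

p_truthful = 1e-12        # assumed chance the "matrix lord" is telling the truth
reward_if_true = 1e30     # stand-in for an astronomically large payoff, in utils
cost_of_paying = 5.0      # utils lost by handing over the money

ev_pay = p_truthful * reward_if_true - cost_of_paying  # roughly 1e18
ev_refuse = 0.0

print(ev_pay > ev_refuse)  # True: the maximizer pays, exactly as described above
```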
But even so, the human brain doesn’t operate on anything like Solomonoff induction, Bayesian probability theory, or expected utility maximization.
People are still agents, though.
They are agents, but they aren’t subject to this specific problem, because we don’t really use expected utility maximization; at best we use some kind of poor approximation of it. But it is a problem for building AIs, or any kind of computer system that makes decisions based on probabilities.
Maybe 1 out of a trillion times a person meets someone claiming to be a Matrix Lord, that person is actually telling the truth
I think you’re considering a different problem than Pascal’s Mugging, if you’re taking it as a given that the probabilities are indeed 1 in a trillion (or for that matter 1 in 10). The original problem doesn’t make such an assumption.
What you have in mind, the case of definitely known probabilities, seems to me more like the Lifespan Dilemma, where e.g. “an unbounded utility on lifespan implies willingness to trade an 80% probability of living some large number of years for a 1/(3^^^3) probability of living some sufficiently longer lifespan”.
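A rough numerical sketch of that trade, under assumptions of my own: utility is taken to be linear in years (one simple way of being unbounded), and a merely tiny stand-in probability is used, since 1/(3^^^3) underflows any floating-point number.

```python
# Toy version of the Lifespan Dilemma trade, assuming utility linear in years.
# 1/(3^^^3) cannot be represented, so a stand-in tiny probability is used;
# the shape of the comparison is the same.

safe_bet  = 0.8   * 1e6   # 80% chance of a million years: 8e5 expected years
long_shot = 1e-18 * 1e30  # tiny chance of a vastly longer lifespan: 1e12

print(long_shot > safe_bet)  # True: under unbounded utility, the long shot wins
```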
The wiki page on it seems to suggest that this is the problem.
If an agent’s utilities over outcomes can potentially grow much faster than the probability of those outcomes diminishes, then it will be dominated by tiny probabilities of hugely important outcomes; speculations about low-probability-high-stakes scenarios will come to dominate his moral decision making… The agent would always have to take those kinds of actions with far-fetched results, that have low but non-negligible probabilities but extremely high returns.
This is seen as an unreasonable result. Intuitively, one is not inclined to acquiesce to the mugger’s demands—or even pay all that much attention one way or another—but what kind of prior does this imply?
Also this
Peter de Blanc has proven[1] that if an agent assigns a finite probability to all computable hypotheses and assigns unboundedly large finite utilities over certain environment inputs, then the expected utility of any outcome is undefined.
which is pretty concerning.
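The following is not de Blanc’s proof, just a toy construction of my own that shows the flavour of the result: let the utilities over hypotheses grow faster than their prior probabilities shrink, and the expected-utility sum never settles.

```python
# Toy construction (mine, not de Blanc's proof): give hypothesis n a prior of
# 2**-n but a utility of 3**n, so each term of the expected-utility sum is
# (2**-n) * (3**n) = 1.5**n.

def partial_expected_utility(n_terms):
    return sum(1.5 ** n for n in range(1, n_terms + 1))

for n in (10, 50, 100):
    print(n, partial_expected_utility(n))  # partial sums blow up instead of converging
```

With only positive utilities the sum simply diverges; allow comparably fast-growing negative utilities too and the partial sums swing back and forth without settling, which is what “undefined” means here.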
I’m curious what you think the problem with Pascal’s Mugging is though. That you can’t easily estimate the probability of such a situation? Well, that is true of anything and isn’t really unique to Pascal’s Mugging. But we can still approximate probabilities. That’s a necessary evil of living in a probabilistic world without the ability to do perfect Bayesian updates on all available information, or unbiased priors.
I’m curious what you think the problem with Pascal’s Mugging is though.
I abhor using unnecessary novel jargon. Bad math being internally bad, that’s the problem. Nothing to do with any worlds, real or imaginary, just a case of internally bad math: utilities are undefined, whether you pay up or not is undefined, the actions chosen are undefined. Akin to maximizing blerg without any definition of what blerg even is; that is, maximizing “expected utility” without having defined it.
The speed prior works, for example (it breaks some of de Blanc’s assumptions; namely, the probability is not bounded from below by any computable function of the length of the hypothesis).
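For readers unfamiliar with it, a hedged sketch of the usual speed-prior intuition (my simplification and my numbers, not the commenter’s math): a hypothesis is discounted by its running time as well as its description length, and a hypothesis that has to simulate 3^^^3 people needs at least on the order of 3^^^3 steps.

```python
# Hedged sketch of a speed-prior-style weight (a simplification of Schmidhuber's
# speed prior; the real thing is defined over program enumerations, but the
# qualitative penalty, description length plus log of running time, is the point).

def log2_speed_weight(description_bits, log2_compute_steps):
    return -(description_bits + log2_compute_steps)

# A short hypothesis that must simulate N observers needs at least on the order
# of N steps, so its weight falls at least as fast as 1/N even though its
# description stays short.
print(log2_speed_weight(description_bits=200, log2_compute_steps=1e15))
```

That is also roughly why it escapes the assumption mentioned above: for a fixed description length the weight can still be driven arbitrarily low by the running time, so it is not bounded from below by any computable function of length alone.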
utilities are undefined, whether you pay up or not is undefined, the actions chosen are undefined. Akin to maximizing blerg without any definition of what blerg even is; that is, maximizing “expected utility” without having defined it.
Call it undefined if you like, but I’d still prefer 3^^^3 people not suffer. It would be pretty weird to argue that human lives decay in utility based on how many there are. If you found out that the universe was bigger than you thought, that there really were far more humans in the universe somehow, would you just stop caring about human life?
It would also be pretty hard to argue that at least some small amount of money isn’t worth giving in order to save a human life, or that giving a small amount of money isn’t worth a small probability of saving enough lives to make up for how small that probability is.
It would be pretty weird to argue that human lives decay in utility based on how many there are.
Well, suppose there are mind uploads, and one upload is very worried about himself, so he runs himself redundantly as 5 exact copies. Should this upload be a minor utility monster?
3^^^3 is far more than there are possible people.
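For readers unfamiliar with the notation, a quick sketch of how 3^^^3 is built up, using the standard Knuth up-arrow definitions (nothing here is specific to this thread):

```python
# Knuth up-arrow notation, as used in "3^^^3" (standard definitions).
# 3^3   = 27
# 3^^3  = 3^(3^3) = 3^27 = 7,625,597,484,987
# 3^^^3 = 3^^(3^^3): a power tower of 3s that is 7,625,597,484,987 levels tall.

def up_arrow(a, n, b):
    """Compute a (up-arrow^n) b by naive recursion; only tiny inputs are feasible."""
    if n == 1:
        return a ** b
    result = 1
    for _ in range(b):
        result = up_arrow(a, n - 1, result)
    return result

print(up_arrow(3, 1, 3))  # 27
print(up_arrow(3, 2, 3))  # 7625597484987
# up_arrow(3, 3, 3) is 3^^^3, far too large to ever be computed explicitly.
```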
If you found out that the universe was bigger than you thought, that there really were far more humans in the universe somehow, would you just stop caring about human life?
Bounded doesn’t mean it just hits a cap and stays there. Also, if you scale down all the utilities that you can affect, it changes nothing about actions (another confusion: mapping the utility to how much one cares).
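One way to picture “bounded but not just hitting a cap”, using a toy utility function of my own choosing rather than anything the commenter specified:

```python
import math

# Toy bounded utility over "lives saved" (purely illustrative): strictly
# increasing for every extra life, but asymptoting to 1 rather than hitting
# a hard cap.

def bounded_utility(lives, scale=1e9):
    return 1.0 - math.exp(-lives / scale)

for n in (1e3, 1e9, 1e18, 1e30):
    print(n, bounded_utility(n))  # the largest values print as 1.0 only because of float rounding

# Since utility never exceeds 1, a 1e-12 chance of any number of lives is worth
# at most 1e-12 utils, so the astronomically large offer no longer dominates.
```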
And yes, there are definitely cases where money is worth a small probability of saving lives, and everyone agrees on those: e.g. if we find out that an asteroid has some chance of hitting Earth, we’d give money to space agencies, even when the chance is rather minute (we wouldn’t give money to cold fusion crackpots, though). There’s nothing fundamentally wrong with spending a bit to avert a small probability of something terrible happening. The problem arises when the probability is overestimated, when the consequences are poorly evaluated, and so on. It is actively harmful, for example, to encourage boys to cry wolf needlessly. I think people innately feel that if they are giving money away, i.e. losing, some giant fairness fairy will make the result more likely good than bad for everyone. The world doesn’t work like this; all those naive folks who jump at the opportunity to give money to someone promising to save the world, no matter how ignorant, uneducated, or crackpotty that person is in the fields where correctness can be checked at all, are increasing risk, not decreasing it.
It would be pretty weird to argue that human lives decay in utility based on how many there are.
Maybe not as weird as all that. Given a forced choice between killing A and B where I know nothing about them, I flip a coin; but add the knowledge that A is a duplicate of C and B is not a duplicate of anyone, and I choose A quite easily. I conclude from this that I value unique human lives quite a lot more than I value non-unique human lives. As others have pointed out, the number of unique human lives is finite, and the number of lives I consider worth living is necessarily even lower, so the more people there are living lives worth living, the less unique any individual is, and therefore the less I value any individual life. (Insofar as my values are consistent, anyway. Which of course they aren’t, but this whole “let’s pretend” game of utility calculation that we enjoy playing depends on treating them as though they were.)