Tom and Andrew, it seems very implausible that someone saying “I will kill 3^^^^3 people unless X” provides literally zero Bayesian evidence that they will kill 3^^^^3 people unless X. Though I guess it could plausibly be weak enough to take much of the force out of the problem.
Andrew, if we’re in a simulation, the world containing the simulation could be able to support 3^^^^3 people. If you knew (magically) that it couldn’t, you could substitute something on the order of 10^50, which is vastly less forceful but may still lead to the same problem.
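To see why the 10^50 substitution may still bite, here is a small illustrative calculation; the credence, the cost figure, and the choice of units are assumptions of mine for the sake of the sketch, not anything claimed in the thread:

```python
# Purely illustrative numbers: even a "modest" 10^50 claim dominates an
# expected-value calculation unless the credence assigned to it is itself
# astronomically small.
claimed_deaths = 10.0 ** 50
p_threat_genuine = 1e-20        # assumed credence in the mugger's threat
cost_of_complying = 5.0         # disutility of paying up, in units where one
                                # death counts as at least one unit

expected_disutility_of_refusing = p_threat_genuine * claimed_deaths
print(expected_disutility_of_refusing)                      # 1e30
print(expected_disutility_of_refusing > cost_of_complying)  # True: the problem persists
```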
Andrew and Steve, you could replace “kill 3^^^^3 people” with “create 3^^^^3 units of disutility according to your utility function”. (I respectfully suggest that we all start using this form of the problem.)
Michael Vassar has suggested that we should consider any number of identical lives to have the same utility as one life. That could be a solution, as it’s impossible to create 3^^^^3 distinct humans. But this is also irrelevant to the create-3^^^^3-disutility-units form.
IIRC, Peter de Blanc told me that any consistent utility function must have an upper bound (meaning that we must discount lives like Steve suggests). The problem disappears if your upper bound is low enough. Hopefully any realistic utility function has such a low upper bound, but it’d still be a good idea to solve the general problem.
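To illustrate, here is a minimal sketch of why a low enough upper bound defuses the mugging; the hard cap and every number below are stand-ins I’ve picked for illustration, not a claim about what a realistic utility function looks like (3^^^^3 itself can’t be represented, so a merely astronomical number plays its part):

```python
CLAIMED_HARM = 10.0 ** 100     # stand-in for 3^^^^3; the real number is far larger
P_MUGGER_TRUTHFUL = 1e-30      # tiny but nonzero credence in the threat
COST_OF_PAYING = 5.0           # utility lost by handing over the money

def unbounded_disutility(lives_lost):
    # Linear aggregation: every claimed death counts in full.
    return lives_lost

def bounded_disutility(lives_lost, bound=1e6):
    # Any bounded utility function would do; a hard cap is the simplest example.
    return min(lives_lost, bound)

# Expected disutility of refusing to pay, under each utility function:
print(P_MUGGER_TRUTHFUL * unbounded_disutility(CLAIMED_HARM))  # ~1e70: dwarfs COST_OF_PAYING
print(P_MUGGER_TRUTHFUL * bounded_disutility(CLAIMED_HARM))    # ~1e-24: negligible next to COST_OF_PAYING
```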
I see a similarity to the police chief example. Adopting a policy of paying attention to Pascalian muggings would encourage others to manipulate you using them. At first it doesn’t seem like this would have nearly enough disutility to justify ignoring muggings, but it might when you consider that it would interfere with responding to any real threat (unlikely as it is) of 3^^^^3 deaths.
create 3^^^^3 units of disutility according to your utility function
For all X:
If your utility function assigns values to outcomes that differ by a factor of X, then you are vulnerable to becoming a fanatic who banks on scenarios that only occur with probability 1/X. As simple as that.
If you think that banking on scenarios that only occur with probability 1/X is silly, then you have implicitly revealed that your utility function only assigns values in the range [1,Y], where Y<X, and where 1 is the lowest utility you assign.
… or your judgments of silliness are out of line with your utility function.
When I said “silly” I meant from an axiological point of view, i.e. after thinking the scenario over, you still think that you would be doing something that makes you win less.
Of course in any such case there are likely to be conflicting intuitions: one to behave as an aggregative consequentialist, and the other to behave like a sane human being.
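A worked instance of the factor-of-X point above, with purely illustrative numbers of my own choosing:

```python
# If outcome A is worth X times outcome B, an expected-utility maximizer is
# indifferent between B for sure and a 1/X chance at A, so any probability
# even slightly above 1/X makes banking on the long shot the "winning" move.
X = 10.0 ** 12
u_B = 1.0           # utility of the sure, mundane outcome
u_A = X * u_B       # utility of the long-shot outcome, X times as valuable

print((1.0 / X) * u_A)        # ~1.0: at probability exactly 1/X the options tie
print((2.0 / X) * u_A > u_B)  # True: at probability 2/X the long shot already wins
```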
Michael Vassar has suggested that we should consider any number of identical lives to have the same utility as one life. That could be a solution, as it’s impossible to create 3^^^^3 distinct humans. But this is also irrelevant to the create-3^^^^3-disutility-units form.
What if we required that the utility function grow no faster than the Kolmogorov complexity of the scenario? This seems like a suitable generalization of Vassar’s proposal.
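A rough sketch of how that requirement might be cashed out; since Kolmogorov complexity is uncomputable, I’m substituting compressed length as a crude, computable upper bound, and every name and constant below is my own illustrative choice:

```python
import zlib

def approx_complexity(scenario_description: str) -> int:
    # Crude stand-in for K(scenario): the byte length of a compressed
    # description.  This only upper-bounds the true Kolmogorov complexity.
    return len(zlib.compress(scenario_description.encode("utf-8")))

def capped_disutility(raw_disutility: float, scenario_description: str,
                      scale: float = 1.0) -> float:
    # The proposal, taken literally: utility may grow no faster than the
    # complexity of the scenario, so clamp the raw value to scale * K(scenario).
    return min(raw_disutility, scale * approx_complexity(scenario_description))

# "3^^^^3 deaths" has a very short description, so its capped disutility stays
# small even though the raw number it names is unimaginably large.
print(capped_disutility(float("inf"), "3^^^^3 people are killed unless you hand over $5"))
```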