Ah, I see.
Having a ‘limited sphere of consequence’ is actually one of the core ideas of deontology (though of course deontologists don’t put it quite like that).
Speaking for myself, although it does seem like an ugly hack, I can’t see any other way of escaping the paranoia of “Pascal’s Mugging”.
Well, one way is to have a bounded utility function. Then Pascal’s Mugging is not a problem.
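A toy numerical sketch of that claim (the probability, the promised payoff, the saturation scale, and the cost-of-paying term are all illustrative assumptions, not anything from this exchange): with utility linear in lives saved, the mugger can always promise enough to outweigh any tiny probability, whereas a utility bounded above caps the expected gain from paying at roughly the probability itself.

```python
# Toy comparison of how an unbounded versus a bounded utility function responds
# to a Pascal's Mugging offer. The probability, promised payoff, saturation
# scale, and cost-of-paying term are all made-up illustrative assumptions.

p = 1e-20                      # probability the mugger's promise is genuine
promised_lives = 3**100        # astronomically many lives the mugger promises to save
cost_of_paying = 1e-6          # (assumed) utility you give up by handing over $5

# 1. Utility linear (unbounded) in lives saved: the mugger can always promise
#    enough lives to make paying look like a good deal, however small p is.
eu_pay_unbounded = p * promised_lives - cost_of_paying
print(eu_pay_unbounded > 0)    # True: pay the mugger

# 2. Bounded utility: squash lives saved into [0, 1), so the expected gain from
#    paying can never exceed p, which even a tiny cost outweighs.
def bounded(lives, scale=10**9):
    return lives / (lives + scale)   # approaches 1 for huge inputs

eu_pay_bounded = p * bounded(promised_lives) - cost_of_paying
print(eu_pay_bounded > 0)      # False: keep your $5
```

The particular squashing function here is arbitrary; any utility with a finite upper bound gives the same qualitative answer, since the expected gain from paying can never exceed the probability times the bound.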
Certainly, but how is a bounded utility function anything other than a way of sneaking in a ‘delimited sphere of consequence’, except that perhaps the ‘sphere’ fades out gradually, like a Gaussian rather than a uniform distribution?
To be clear, we should disentangle the agent’s own utility function from what the agent thinks is ethical. If the agent is prepared to throw ethics to the wind, then it’s impervious to Pascal’s Mugging. If the agent is a consequentialist who sees ethics as optimization of “the universe’s utility function”, then Pascal’s Mugging becomes a problem, but yes, taking the universe to have a bounded utility function solves it. But now let’s see what follows from this. Either:
(1) We have to ‘weight’ people ‘close to us’ much more highly than people far away when calculating which of our actions are ‘right’. So in effect, we end up being deontologists who say we have special obligations towards friends and family that we don’t have towards strangers. (Delimited sphere of consequence.)
(2) If we still try to account for all people equally regardless of their proximity to us, and still have a bounded utility function, then upon learning that the universe is Vast (with, say, Graham’s number of people in it) we infer that the universe is ‘morally insensitive’ to the deaths of huge numbers of people, whoever they are: suppose we escape Pascal’s Mugging by deciding that, in such a vast universe, a 1/N chance of M people dying is something we can live with (for some M >> N >> 1). Then, if we knew for sure that the universe was Vast, we ought to be able to ‘live with’ a certainty of M/N people dying, since with the utility function that close to its bound both prospects cost almost exactly the same negligible amount of utility. And if we deny that it makes a moral difference how close these people are to us, then these M/N people may as well include, say, the citizens of one of Earth’s continents. So if a mad tyrant gives you perfect assurance that they will nuke South America unless you give them your Mars bar (and perfect assurance that they won’t if you do), then apparently you should refuse to hand it over, on pain of inconsistency with your response to Pascal’s Mugging.
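A rough numerical illustration of the worry in (2), with an arbitrary saturating utility and made-up values for the population, M and N (none of them from the comment above): once the utility function is nearly saturated by a Vast population, the utility cost of a 1/N chance of M deaths and the utility cost of a certainty of M/N deaths come out nearly identical, and both negligible.

```python
from fractions import Fraction

# Rough illustration of the "moral insensitivity" worry. The saturating utility
# U(n) = n / (n + s) and the values of P, M, N below are arbitrary assumptions.

def U(n, s=10**9):
    return Fraction(n, n + s)   # bounded above by 1, nearly saturated for huge n

P = 10**100    # stand-in for a Vast population (Graham's number won't fit here)
M = 10**9      # a huge number of deaths, roughly a continent's worth
N = 10**3      # so M/N = 10**6 certain deaths

baseline = U(P)

# Utility cost of a 1/N chance that M people die:
gamble_loss = baseline - (Fraction(1, N) * U(P - M) + Fraction(N - 1, N) * U(P))

# Utility cost of M/N people dying for certain:
certain_loss = baseline - U(P - M // N)

print(float(gamble_loss))    # about 1e-185
print(float(certain_loss))   # also about 1e-185
# Both losses are nearly identical and utterly negligible to this utility
# function, which is the insensitivity the comment is pointing at.
```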
To answer (2), your utility function can have more than one reason to value people not dying. For example, you could have one component of utility for the total number of people alive, and another for the fraction of people who lead good lives. Since having their lives terminated decreases the quality of those lives, killing those people would make a difference to the average quality of life across the multiverse, if the multiverse is finite.
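A sketch of the kind of two-component utility function being described; the functional forms, the saturation scale, and the numbers are assumptions chosen for illustration, not anything the comment specifies. The point it makes concrete: the ‘total people alive’ term is saturated and barely registers the deaths, but the ‘fraction of people leading good lives’ term moves in proportion to the fraction killed, however Vast the population is.

```python
from fractions import Fraction

# Sketch of a two-component utility function of the kind described above. The
# functional forms, saturation scale, and numbers are illustrative assumptions.

def u_total(alive, scale=10**9):
    # Bounded term for the total number of people alive; saturates near 1.
    return Fraction(alive, alive + scale)

def u_avg(good_lives, population):
    # Fraction of the (finite) population whose lives go well; already in [0, 1].
    return Fraction(good_lives, population)

P = 10**30           # Vast but finite population (illustrative)
good = 9 * 10**29    # 90% of them lead good lives (illustrative)
killed = 10**27      # kill 0.1% of the population, all previously leading good lives

# The "total alive" term barely notices: it is already saturated at this size.
delta_total = u_total(P) - u_total(P - killed)
print(float(delta_total))   # on the order of 1e-24: effectively invisible

# The "average quality" term does notice: a terminated life is scored as no
# longer a good one, so the average drops by killed / P however Vast P is.
delta_avg = u_avg(good, P) - u_avg(good - killed, P)
print(float(delta_avg))     # 0.001, proportional to the fraction killed
```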
If the multiverse is infinite, then something like “caring about people close to you” is required for consequentialism to work.
Actually, I think I’ll take that back. It depends on exactly how things play out.