you just need to make it so that the supremum of their values is 1 and the infimum is 0.
Fair. Intuitively though, this feels more like a rescaling of an underlying satisfaction measure than a plausible definition of satisfaction to me. That said, if you’re a preferentist, I accept this is internally consistent, and likely an improvement on alternative versions of preferentism.
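(For concreteness: if there is an underlying real-valued measure $u$ that is unbounded in both directions, one way to get the supremum-1/infimum-0 property is just to squash it through a bounded increasing function; this particular transform is my own illustration, not anything proposed above.)

$$ s(x) \;=\; \tfrac{1}{2} + \tfrac{1}{\pi}\arctan\!\big(u(x)\big), \qquad \sup_x s(x) = 1, \quad \inf_x s(x) = 0. $$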
One issue with only having boundedness above is that the expected life satisfaction of an arbitrary agent would probably often be undefined or −∞.
Yes, and I am obviously not proposing a solution to this problem! More just suggesting that, if there are infinities in the problem that appear to correspond to actual things we care about, then defining them out of existence seems more like deprioritising the problem than solving it.
The utility monster feels an incredibly strong need to have everyone on Earth be tortured
I think this framing muddies the intuition pump by introducing sadistic preferences, rather than focusing just on unboundedness below. I don’t think it’s necessary to do this: unboundedness below means there’s a sense in which everyone is a potential “negative utility monster” if you torture them long enough. I think the core issue here is whether there’s some point at which we just stop caring, or whether that’s morally repugnant.
in order to act, you need more than just a consistent preference order over possible universes. In reality, you only get to choose between probability distributions over possible worlds, not specific possible worlds
Sorry, sloppy wording on my part. The question should have been “does this actually prevent us having a consistent preference ordering over gambles over universes” (even if we are not able to represent those preferences as maximising the expectation of a real-valued social welfare function)? We know (from lexicographic preferences) that “no-real-valued-utility-function-we-are-maximising-expectations-of” does not immediately imply “no-consistent-preference-ordering” (if we’re willing to accept orderings that violate continuity). So pointing to undefined expectations doesn’t seem to immediately rule out consistent choice.
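To make the lexicographic point concrete, here is a minimal toy sketch (my own illustration, not anything from the earlier discussion) of a complete, transitive ordering over gambles that cannot be written as maximising the expectation of any real-valued utility function over outcomes:

```python
# A gamble is a list of (probability, (primary, secondary)) pairs; the primary
# attribute is valued "infinitely more" than the secondary one.

def expectations(gamble):
    """Componentwise expected value of the two attributes."""
    e1 = sum(p * x1 for p, (x1, _x2) in gamble)
    e2 = sum(p * x2 for p, (_x1, x2) in gamble)
    return (e1, e2)

def lex_prefers(a, b):
    """True if gamble a is strictly preferred to gamble b: compare expected
    primary value first, and use the secondary value only to break exact ties.
    This ordering is complete and transitive, but it violates continuity, so
    it has no expected-utility representation with a real-valued utility."""
    return expectations(a) > expectations(b)  # Python tuples compare lexicographically

# Any sure gain on the primary attribute beats any gain, however large, on the secondary.
sure_thing = [(1.0, (1.0, 0.0))]
long_shot  = [(0.01, (1.0, 10**9)), (0.99, (0.0, 10**9))]
print(lex_prefers(sure_thing, long_shot))  # True
```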
I think this framing muddies the intuition pump by introducing sadistic preferences, rather than focusing just on unboundedness below. I don’t think it’s necessary to do this: unboundedness below means there’s a sense in which everyone is a potential “negative utility monster” if you torture them long enough. I think the core issue here is whether there’s some point at which we just stop caring, or whether that’s morally repugnant.
Fair enough. So I’ll provide a non-sadistic scenario. Consider again the scenario I previously described in which you have a 0.5 chance of being tortured for 3^^^^3 years, but also have the repeated opportunity to cause yourself minor discomfort in the case of not being tortured and as a result get your possible torture sentence reduced by 50 years.
If you have a utility function that is unbounded below and in which each 50 years of torture causes a linear decrease in satisfaction or utility, then to maximize expected utility or life satisfaction, it seems you would need to opt for living in extreme discomfort in the non-torture scenario in order to decrease your possible torture time by an astronomically small proportion, provided the expectations are defined.
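To spell out the arithmetic (with symbols of my own choosing: a disutility of $k$ per torture-year and a finite cost $c < 50k$ per accepted bout of discomfort), accepting one more offer changes expected utility by

$$ \Delta\mathbb{E}[U] \;=\; \underbrace{0.5 \cdot 50k}_{\text{torture branch: 50 fewer years}} \;-\; \underbrace{0.5 \cdot c}_{\text{non-torture branch: extra discomfort}} \;>\; 0, $$

so the expected-utility maximizer accepts every offer available, ending up extremely uncomfortable while shortening the possible sentence by only an astronomically small proportion.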
To me, at least, it seems clear that you should not take the opportunities to reduce your torture sentence. After all, if you repeatedly decide to take them, you will end up with a 0.5 chance of being highly uncomfortable and a 0.5 chance of being tortured for 3^^^^3 years. This seems like a really bad lottery, and worse than the one that lets me have a 0.5 chance of having an okay life.
Sorry, sloppy wording on my part. The question should have been “does this actually prevent us having a consistent preference ordering over gambles over universes” (even if we are not able to represent those preferences as maximising the expectation of a real-valued social welfare function)? We know (from lexicographic preferences) that “no-real-valued-utility-function-we-are-maximising-expectations-of” does not immediately imply “no-consistent-preference-ordering” (if we’re willing to accept orderings that violate continuity). So pointing to undefined expectations doesn’t seem to immediately rule out consistent choice.
Oh, I see. And yes, you can have consistent preference orderings that aren’t representable by a utility function, and such approaches have been proposed before in infinite ethics. For example, one of Bostrom’s proposals for dealing with infinite ethics is the extended decision rule. Essentially, it says to first look at the set of actions you could take that would maximize P(infinite good) − P(infinite bad). If there is only one such action, take it. Otherwise, take whichever of these actions has the highest expected moral value given a finite universe.
As far as I know, you can’t represent the above as a utility function, despite it being consistent.
However, the big problem with the above decision rule is that it suffers from the fanaticism problem: people would be willing to bear any finite cost, even 3^^^3 years of torture, for even an unfathomably small increase in the probability of infinite good or decrease in the probability of infinite bad. And this can get to pretty ridiculous levels. For example, suppose you are sure you could easily design a world that, if implemented, would make every creature happy and greatly increase the moral value of the world in a finite universe. However, coming up with such a design would take one second of computation on your supercomputer, which means one less second spent thinking about astronomically improbable situations in which you could cause infinite good. Spending that second would thus carry some minuscule chance of forgoing infinite good or causing infinite bad. So you decide not to help anyone, because you won’t spare the one second of computer time.
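Here is a rough sketch of the rule and of the worry in code; the actions, probabilities, and field names are made up purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Hypothetical bookkeeping for one available action (the fields are my own invention)."""
    name: str
    p_infinite_good: float        # probability the action brings about infinite good
    p_infinite_bad: float         # probability the action brings about infinite bad
    finite_expected_value: float  # expected moral value conditional on a finite universe

def extended_decision_rule(actions):
    """First maximize P(infinite good) - P(infinite bad); among the maximizers,
    pick whichever has the highest expected moral value given a finite universe."""
    best = max(a.p_infinite_good - a.p_infinite_bad for a in actions)
    finalists = [a for a in actions if a.p_infinite_good - a.p_infinite_bad == best]
    return max(finalists, key=lambda a: a.finite_expected_value)

# The fanaticism worry: an utterly negligible edge on the infinite term
# overrides an enormous finite benefit.
design_happy_world = Action("spend one second designing the happy world", 1e-30, 0.0, 10**9)
keep_pondering     = Action("keep pondering improbable infinite payoffs", 2e-30, 0.0, 0.0)
print(extended_decision_rule([design_happy_world, keep_pondering]).name)
# -> "keep pondering improbable infinite payoffs"
```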
More generally, I think the basic property of non-real-valued consistent preference orderings is that they value some things “infinitely more” than others. The issue is, if you really value some property infinitely more than some other property of lesser importance, it won’t be worth your time to even consider pursuing the property of lesser importance, because it’s always possible you could have used the extra computation to slightly increase your chances of getting the property of greater importance.
To me, at least, it seems clear that you should not take the opportunities to reduce your torture sentence. After all, if you repeatedly decide to take them, you will end up with a 0.5 chance of being highly uncomfortable and a 0.5 chance of being tortured for 3^^^^3 years. This seems like a really bad lottery, and worse than the one that lets me have a 0.5 chance of having an okay life.
FWIW, this conclusion is not clear to me. To return to one of my original points: I don’t think you can dodge this objection by arguing from potentially idiosyncratic preferences, even perfectly reasonable ones; rather, you need it to be the case that no rational agent could have different preferences. Either that, or you need to be willing to override otherwise rational individual preferences when making interpersonal tradeoffs.
To be honest, I’m actually not entirely averse to the latter option: having interpersonal trade-offs determined by contingent individual risk-preferences has never seemed especially well-justified to me (particularly if probability is in the mind). But I confess it’s not clear whether that route is open to you, given the motivation for your system as a whole.
More generally, I think the basic property of non-real-valued consistent preference orderings is that they value some things “infinitely more” than others.
FWIW, this conclusion is not clear to me. To return to one of my original points: I don’t think you can dodge this objection by arguing from potentially idiosyncratic preferences, even perfectly reasonable ones; rather, you need it to be the case that no rational agent could have different preferences. Either that, or you need to be willing to override otherwise rational individual preferences when making interpersonal tradeoffs.
Yes, that’s correct. It’s possible that there are some agents with consistent preferences that really would wish to get extraordinarily uncomfortable to avoid the torture. My point was just that this doesn’t seem like it would be a common thing for agents to want.
Still, it is conceivable that there are at least a few agents out there that would consistently want to opt for the option with a 0.5 chance of being extremely uncomfortable, and I do suppose it would be best to respect their wishes. This is a problem that I hadn’t previously fully appreciated, so I would like to thank you for bringing it up.
Luckily, I think I’ve finally figured out a way to adapt my ethical system to deal with this. That is, the adaptation will allow agents to choose the extreme-discomfort-from-dust-specks option if that is what they wish, and will allow my ethical system to respect that preference. To do this, allow the satisfaction measure to include infinitesimals. Then, to respect the preferences of such agents, you just need to pick the right satisfaction measure.
Consider the agent for which each 50 years of torture causes a linear decrease in its utility function. For simplicity, imagine torture and discomfort are the only things the agent cares about; it has no other preferences. Also assume that the agent dislikes torture more than it dislikes discomfort, but only by a finite amount. Since the agent’s utility function/satisfaction measure is linear, I suppose being tortured for an eternity would be infinitely worse for the agent than being tortured for a finite amount of time. So, assign satisfaction 0 to the scenario in which the agent is tortured for eternity. And if the agent is instead tortured for n ∈ ℝ years, let the agent’s satisfaction be 1 − nϵ, where ϵ is whatever infinitesimal number you want. If my understanding of infinitesimals is correct, I think this will do what we want it to do in terms of having agents using my ethical system respect this agent’s preferences.
Specifically, since being tortured forever would be infinitely worse than being tortured for a finite amount of time, any finite amount of torture would be accepted to decrease the chance of infinite torture. And this is what maximizing this satisfaction measure does: for any lottery, changing the chance of infinite torture has a finite effect on expected satisfaction, whereas changing the chance of finite torture has only an infinitesimal effect, so avoiding infinite torture would be prioritized.
Further, among lotteries involving finite amounts of torture, it seems the ethical system using this satisfaction measure continues to do what it’s supposed to do. For example, consider the choice between the previous two options:
1. A 0.5 chance of being tortured for 3^^^^3 years and a 0.5 chance of being fine.
2. A 0.5 chance of 3^^^^3 − 9999999 years of torture and a 0.5 chance of being extraordinarily uncomfortable.
If I’m using my infinitesimal math right, the expected satisfaction of taking option 1 would be 1 − (0.5 ∗ 3↑↑↑↑3ϵ + 0.5 ∗ ϵ), and the expected satisfaction of taking option 2 would be 1 − (0.5 ∗ (3↑↑↑↑3 − 9999999)ϵ + 0.5 ∗ mϵ), for some m ≪ 3↑↑↑↑3. Thus, to maximize this agent’s satisfaction measure, my moral system would indeed let the agent give infinite priority to avoiding infinite torture; the ethical system would itself consider the agent getting infinite torture to be infinitely worse than getting finite torture, and would treat finite amounts of torture as decreasing satisfaction linearly. And, since the satisfaction measure is still technically bounded, it would still avoid the problem with utility monsters.
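In case it helps, here is a minimal executable sketch of this arithmetic, tracking the infinitesimal as a separate coefficient and comparing lexicographically. N and M below are small finite stand-ins (my own placeholders) for 3↑↑↑↑3 and m, which are far too large to store directly:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Hyper:
    """A value of the form a + b*eps, for a fixed positive infinitesimal eps.
    This is a minimal stand-in for an infinitesimal-valued satisfaction measure."""
    std: float  # standard part a
    inf: float  # coefficient b on eps

    def __add__(self, other):
        return Hyper(self.std + other.std, self.inf + other.inf)

    def scale(self, p):
        return Hyper(p * self.std, p * self.inf)

    def __lt__(self, other):
        # Any difference in the standard part outweighs any eps-sized difference.
        return (self.std, self.inf) < (other.std, other.inf)

def satisfaction(loss: Optional[float]) -> Hyper:
    """Satisfaction 1 - loss*eps for a finite loss (torture years plus discomfort,
    in the agent's own units); satisfaction 0 for eternal torture (loss=None)."""
    return Hyper(0.0, 0.0) if loss is None else Hyper(1.0, -loss)

def expected(lottery) -> Hyper:
    """Expected satisfaction of a lottery given as (probability, loss) pairs."""
    total = Hyper(0.0, 0.0)
    for p, loss in lottery:
        total = total + satisfaction(loss).scale(p)
    return total

N = 10.0 ** 12  # stand-in for 3^^^^3 torture-years
M = 10.0 ** 6   # stand-in for the discomfort coefficient m << 3^^^^3

option_1 = [(0.5, N), (0.5, 1.0)]          # long torture, or fine (the loss of 1 mirrors the 0.5*eps term above)
option_2 = [(0.5, N - 9999999), (0.5, M)]  # slightly shorter torture, or extreme discomfort
eternal  = [(0.5, None), (0.5, 0.0)]       # any risk of eternal torture

print(expected(option_1) < expected(option_2))  # True: the agent's preference for option 2 is respected
print(expected(eternal) < expected(option_2))   # True: eternal torture is infinitely worse than any finite amount
```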
(In case it was unclear, ↑ is Knuth’s up-arrow notation, just like “^”.)
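(To give a sense of how fast the notation grows:

$$ 3\uparrow 3 = 3^3 = 27, \qquad 3\uparrow\uparrow 3 = 3^{3^3} = 3^{27} = 7{,}625{,}597{,}484{,}987, $$

and each extra arrow iterates the whole previous operation, so 3↑↑↑↑3 = 3^^^^3 is unfathomably larger still.)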
To be honest, I’m actually not entirely averse to the latter option: having interpersonal trade-offs determined by contingent individual risk-preferences has never seemed especially well-justified to me (particularly if probability is in the mind). But I confess it’s not clear whether that route is open to you, given the motivation for your system as a whole.
That makes sense, thanks.