My point was more that, even if you can calculate the expectation, standard versions of average utilitarianism are usually rejected for non-infinitarian reasons (e.g. the repugnant conclusion) that seem like they would plausibly carry over to this proposal as well.
If I understand correctly, average utilitarianism isn’t rejected due to the repugnant conclusion. In fact, it’s the opposite: the repugnant conclusion is a problem for total utilitarianism, and average utilitarianism is one way to avoid the problem. I’m just going off what I read on The Stanford Encyclopedia of Philosophy, but I don’t have particular reason to doubt what it says.
Separately, while I understand the technical reasons for imposing boundedness on the utility function, I think you probably also need a substantive argument for why boundedness makes sense, or at least is morally acceptable. Boundedness below risks having some pretty unappealing properties, I think.
Yes, I do think boundedness is essential for a utility function. The issue with unbounded utility functions is that the expected value according to some probability distributions will be undefined. For example, if your utility follows a Cauchy distribution, then the expected utility is undefined.
Your actual probability distribution over utilities in an unbounded utility function wouldn’t exactly follow a Cauchy distribution. However, I think that for whatever reasonable probability distribution you would use in real life, an unbounded utility function would still have an undefined expected value.
To see why, note that there is a non-zero probability that your utility really will be sampled from a Cauchy distribution. For example, suppose you’re in some simulation run by aliens, and to determine your utility in your life after the simulation ends, they sample from a Cauchy distribution. (This is supposing that they’re powerful enough to give you any utility.) I don’t have any completely conclusive evidence to rule out this possibility, so it has non-zero probability. It’s not clear to me why an alien would do the above, or that they would even have the power to, but I still have no way to rule it out with infinite confidence. So your expected utility, conditioning on being in this situation, would be undefined. As a result, you can prove that your total expected utility would also be undefined.
So it seems to me that the only way you can actually have your expected values be robustly well-defined is by having a bounded utility function.
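Just to illustrate this numerically (a toy simulation, nothing more): the running average of Cauchy draws never settles down, while the running average of the same draws pushed through a bounded function does.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Utilities" drawn from a standard Cauchy distribution, whose mean is undefined.
samples = rng.standard_cauchy(1_000_000)

# Running averages of the raw draws never stabilize: rare, enormous draws keep
# dominating the partial sums, so there is nothing for the averages to converge to.
for n in (10**2, 10**3, 10**4, 10**5, 10**6):
    print(f"unbounded utility, mean of first {n:>7} draws: {samples[:n].mean():10.3f}")

# Squashing the same draws through a bounded function (here arctan, rescaled into
# (0, 1)) gives running averages that do settle down, near 0.5 by symmetry.
bounded = np.arctan(samples) / np.pi + 0.5
for n in (10**2, 10**3, 10**4, 10**5, 10**6):
    print(f"bounded utility,   mean of first {n:>7} draws: {bounded[:n].mean():10.3f}")
```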
Because the sigmoid function essentially saturates at very low levels of welfare, at some point you seem to end up in a perverse version of Torture vs. dust specks where you think it’s ok (or indeed required) to have 3^^^3 people (whose lives are already sufficiently terrible) horribly tortured for fifty years without hope or rest, to avoid someone in the middle of the welfare distribution getting a dust speck in their eye.
In principle, I do think this could occur. I agree that at first it intuitively seems undesirable. However, I’m not convinced it is, and I’m not convinced that there is a value system that avoids this without having even more undesirable results.
It’s important to note that the sufficiently terrible lives need to be really, really, really bad already. So much so that being horribly tortured for fifty years has almost exactly no effect on their overall satisfaction. For example, maybe they’re already being tortured for more than 3^^^^3 years, so adding fifty more years does almost exactly nothing to their life satisfaction.
Maybe it still seems to you that getting tortured for 50 more years would still be worse than getting a dust speck in the eye of an average person. However, if so, consider this scenario. You know you have a 50% chance of being tortured for more than 3^^^^3 years, and a 50% chance of not being tortured and living in a regular world. However, you have a choice: you can agree to get a very minor form of discomfort, like a dust speck in your eye, in the case in which you aren’t tortured, and in exchange you will be tortured for 50 fewer years if you do end up in the situation in which you get tortured. So I suppose, given what you say, you would take it. But suppose you were given this opportunity again. Well, you’d again be able to subtract 50 years of torture and get just a dust speck, so I guess you’d take it.
Imagine you’re allowed to repeat this process for an extremely long time. If you think that getting one dust speck is worth it to avoid 50 years of torture, then I think you would keep accepting one more dust speck until your eyes have as much dust in them as they possibly could. And then, once you’re done with this, you could go on to accepting some other extremely minor form of discomfort to avoid another 50 years of torture. Maybe you start accepting an almost-exactly-imperceptible amount of back pain for another 50 years of torture reduction. And then continue this until your back, and the rest of your body parts, hurt quite a lot.
Here’s the result of your deals: you have a 50% chance of being incredibly uncomfortable. Your eyes are constantly blinded and heavily irritated by dust specks, and you feel a lot of pain all over your body. And you have a 50% chance of being horribly tortured for more than 3^^^^3 years. Note that even though you get your torture sentence reduced by 50 * <number of extremely minor discomforts you accept> years, the amount of time you spend being tortured only decreases by a very, very, very small, almost infinitesimal proportion.
Personally, I would much rather have a 50% chance of a life that is actually decent, even if it means that I won’t get to decrease the amount of time I’d spend possibly getting tortured by a near-infinitesimal proportion.
What if you still refuse? Well, the only way I can think of to justify your refusal is by having an unbounded utility function, so that getting an extra 50 years of torture is around as bad as getting the first 50 years of torture. But as I’ve said, the expected values of unbounded utility functions seem to be undefined in reality, so this doesn’t seem like a good idea.
My point from the above is that getting one more dust speck in someone’s eye could in principle be better than having someone be tortured for 50 years, provided the tortured person would already have been tortured for a super-ultra-virtually-infinitely long time anyways.
It’s important to note that the sufficiently terrible lives need to be really, really, really bad already. So much so that being horribly tortured for fifty years has almost exactly no effect on their overall satisfaction. For example, maybe they’re already being tortured for more than 3^^^^3 years, so adding fifty more years does almost exactly nothing to their life satisfaction.
I realise now that I may have moved through a critical step of the argument quite quickly above, which may be why this quote doesn’t seem to capture the core of the objection I was trying to describe. Let me take another shot.
I am very much not suggesting that 50 years of torture does virtually nothing to [life satisfaction—or whatever other empirical value you want to take as axiologically primitive; happy to stick with life satisfaction as a running example]. I am suggesting that 50 years of torture is terrible for [life satisfaction]. I am then drawing a distinction between [life-satisfaction] and the output of the utility function that you then take expectations of. The reason I am doing this, is because it seems to me that whether [life satisfaction] is bounded is a contingent empirical question, not one that can be settled by normative fiat in order to make it easier to take expectations.
If, as a matter of empirical fact, [life satisfaction] is bounded, then the objection I describe will not bite.
If, on the other hand, [life-satisfaction] is not bounded, then requiring the utility function you take expectations of to be bounded forces us to adopt some form of sigmoid mapping from [life satisfaction] to “utility”, and this in turn forces us, at some margin, to not care about things that are absolutely awful (from the perspective of [life satisfaction]). (If an extra 50 years of torture isn’t sufficiently awful for some reason, then we just need to pick something more awful for the purposes of the argument.)
Perhaps because I didn’t explain this very well the first time, what’s not totally clear to me from your response, is whether you think:
(a) [life satisfaction] is in fact bounded; or
(b) even if [life satisfaction] is unbounded, it’s actually ok to not care about stuff that is absolutely (infinitely?) awful from the perspective of [life-satisfaction] because it lets us take expectations more conveniently. [Intentionally provocative framing, sorry. Intended as an attempt to prompt genuine reflection, rather than to score rhetorical points.]
It’s possible that (a) is true, and much of your response seems like it’s probably (?) targeted at that claim, but FWIW, I don’t think this case can be convincingly made by appealing to contingent personal values: e.g. suggesting that another 50 years of torture wouldn’t much matter to you personally won’t escape the objection, as long as there’s a possible agent who would view their life-satisfaction as being materially reduced in the same circumstances.
Suggesting evolutionary bounds on satisfaction is another potential avenue of argument, but also feels too contingent to do what you really want.
Maybe you could make a case for (a) if you were to substitute a representation of individual preferences for [life satisfaction]? I’m personally disinclined towards preferences as moral primitives, particularly as they’re not unique, and consequently can’t deal with distributional issues, but YMMV.
ETA: An alternative (more promising?) approach could be to accept that, while it may not cover all possible choices, in practice we’re more likely to face choices with an infinite extensive margin than with an infinite intensive margin, and that the proposed method could be a reasonable decision rule for such choices. Practically, this seems like it would be acceptable as long as whatever function we’re using to map [life-satisfaction] into utility isn’t a sigmoid over the relevant range, and instead has a (weakly) negative second derivative over the (finite) range of [life satisfaction] covered by all relevant options.
(I assume (in)ability-to-take-expectations wasn’t intended as an argument for (a), as it doesn’t seem up to making such an empirical case?)
On the other hand, if you’re actually arguing for (b), then I guess that’s a bullet you can bite; though I think I’d still be trying to dodge it if I could. ETA: If there’s no alternative but to ignore infinities on either the intensive or extensive margin, I could accept choosing the intensive margin, but I’m inclined to think this choice should be explicitly justified, and recognised as tragic if it really can’t be avoided.
It’s possible that (a) is true, and much of your response seems like it’s probably (?) targeted at that claim, but FWIW, I don’t think this case can be convincingly made by appealing to contingent personal values: e.g. suggesting that another 50 years of torture wouldn’t much matter to you personally won’t escape the objection, as long as there’s a possible agent who would view their life-satisfaction as being materially reduced in the same circumstances.
To some extent, whether or not life satisfaction is bounded just comes down to how you want to measure it. But it seems to me that any reasonable measure of life satisfaction really would be bounded.
I’ll clarify the measure of life satisfaction I had in mind. Imagine if you showed an agent finitely-many descriptions of situations they could end up being in, and asked the agent to pick out the worst and the best of all of them. Assign the worst scenario satisfaction 0 and the best scenario satisfaction 1. For any other outcome w, set the satisfaction to p, where p is the probability at which the agent would be indifferent between getting w for certain and a gamble that gives satisfaction 1 with probability p and satisfaction 0 with probability 1 - p. This is very much like the standard-gamble technique for constructing a von Neumann–Morgenstern utility function from elicited preferences. So, according to my definition, life satisfaction is bounded by definition.
(You can also take the limit of the agent’s preferences as the number of described situations approaches infinity, if you want and if it converges. If it doesn’t, then you could instead just ask the agent about its preferences over infinitely-many scenarios and require the infimum of satisfactions to be 0 and the supremum to be 1. Also, you might need to do something special to deal with agents whose preferences are inconsistent even given infinite reflection, but I don’t think this is particularly relevant to the discussion.)
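Here’s a rough sketch, in code, of the elicitation procedure I have in mind. The helper names and the toy agent are made up purely for illustration; in reality you’d be querying an actual agent’s preferences rather than a formula.

```python
from typing import Callable

def satisfaction_of(
    prefers_sure_outcome: Callable[[str, float], bool],
    outcome: str,
    tol: float = 1e-9,
) -> float:
    """Probability-equivalent ("standard gamble") score for one described outcome.

    `prefers_sure_outcome(outcome, p)` is assumed to say whether the agent prefers
    `outcome` for certain over a gamble giving the best scenario (satisfaction 1)
    with probability p and the worst scenario (satisfaction 0) otherwise. Assuming
    this flips monotonically in p, the indifference point can be found by binary
    search, and that point is the outcome's satisfaction in [0, 1].
    """
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if prefers_sure_outcome(outcome, mid):
            lo = mid   # the sure outcome still beats the gamble; try a higher p
        else:
            hi = mid   # the gamble now beats the sure outcome; try a lower p
    return (lo + hi) / 2


# Toy stand-in agent whose preferences over "n years of torture" happen to be
# generated by the underlying score 1 / (1 + n). Purely hypothetical.
def toy_agent(outcome: str, p: float) -> bool:
    years = float(outcome)
    return 1.0 / (1.0 + years) > p

print(satisfaction_of(toy_agent, "50"))    # ~0.0196
print(satisfaction_of(toy_agent, "1e9"))   # ~1e-9: much worse, but still above 0
```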
Now, maybe you’re opposed to this measure. However, if you reject it, I think you have a pretty big problem you need to deal with: utility monsters.
To quote Wikipedia:
A hypothetical being, which Nozick calls the utility monster, receives much more utility from each unit of a resource they consume than anyone else does. For instance, eating a cookie might bring only one unit of pleasure to an ordinary person but could bring 100 units of pleasure to a utility monster. If the utility monster can get so much pleasure from each unit of resources, it follows from utilitarianism that the distribution of resources should acknowledge this. If the utility monster existed, it would justify the mistreatment and perhaps annihilation of everyone else, according to the mandates of utilitarianism, because, for the utility monster, the pleasure they receive outweighs the suffering they may cause.
If you have some agents with unbounded measures of satisfaction, then I think that would imply you would need to be willing to cause arbitrarily large amounts of suffering to agents with bounded satisfaction in order to increase the satisfaction of a utility monster as much as possible.
This seems pretty horrible to me, so I’m satisfied with keeping the measure of life satisfaction bounded.
In principle, you could have utility monster-like creatures in my ethical system, too. Perhaps all the agents other than the monster really have very little in the way of preferences, and so their life satisfaction doesn’t change much at all by you helping them. Then you could potentially give resources to the monster. However, the effect of “utility monsters” is much more limited in my ethical system, and it’s an effect that doesn’t seem intuitively undesirable to me. Unlike if you had an unbounded satisfaction measure, my ethical system doesn’t allow a single agent to cause arbitrarily large amounts of suffering to arbitrarily large numbers of other agents.
Further, suppose you do decide to have an unbounded measure of life satisfaction and aggregate it to allow even a finite universe to have arbitrarily high or low moral value. Then the expected moral value of the world would be undefined, just like how the expected values of unbounded utility functions are undefined. Specifically, just consider having a Cauchy distribution over the moral value of the universe. Such a distribution has no expected value. So, if you’re trying to maximize the expected moral value of the universe, you won’t be able to. And, as a moral agent, what else are you supposed to do?
Also, I want to mention that there’s a trivial case in which you could avoid having my ethical system torture the agent for 50 years. Specifically, maybe there’s some particular 50 years that decreases the agent’s life satisfaction a lot, even though other 50-year stretches don’t. For example, maybe the agent dreads the idea of having more than a million years of torture, so specifically adding those last 50 years would be a problem. But I’m guessing you aren’t worried about this specific case.
I’ll clarify the measure of life satisfaction I had in mind. Imagine if you showed an agent finitely-many descriptions of situations they could end up being in, and asked the agent to pick out the worst and the best of all of them. Assign the worst scenario satisfaction 0 and the best scenario satisfaction 1.
Thanks. I’ve toyed with similar ideas previously myself. The advantage, if this sort of thing works, is that it conveniently avoids a major issue with preference-based measures: that they’re not unique and therefore incomparable across individuals. However, this method seems fragile in relying on a finite number of scenarios: doesn’t it break if it’s possible to imagine something worse than whatever the currently worst scenario is? (E.g. just keep adding 50 more years of torture.) While this might be a reasonable approximation in some circumstances, it doesn’t seem like a fully coherent solution to me.
This seems pretty horrible to me, so I’m satisfied with keeping the measure of life satisfaction bounded.
IMO, the problem highlighted by the utility monster objection is fundamentally a prioritarian one. A transformation that guarantees boundedness above seems capable of resolving this, without requiring boundedness below (and thus avoiding the problematic consequences that boundedness below introduces).
Further, suppose you do decide to have an unbounded measure of life satisfaction
Given issues with the methodology proposed above for constructing bounded satisfaction functions, it’s still not entirely clear to me that this is really a decision, as opposed to an empirical question (which we then need to decide how to cope with from a normative perspective). This seems like it may be a key difference in our perspectives here.
So, if you’re trying to maximize the expected moral value of the universe, you won’t be able to. And, as a moral agent, what else are you supposed to do?
Well, in general terms the answer to this question has to be either (a) bite a bullet, or (b) find another solution that avoids the uncomfortable trade-offs. It seems to me that you’ll be willing to bite most bullets here. (Though I confess it’s actually a little hard for me to tell whether you’re also denying that there’s any meaningful tradeoff here; that case still strikes me as less plausible.) If so, that’s fine, but I hope you’ll understand why to some of us that might feel less like a solution to the issue of infinities, than a decision to just not worry about them on a particular dimension. Perhaps that’s ultimately necessary, but it’s definitely non-ideal from my perspective.
A final random thought/question: I get that we can’t expected utility maximise unless we can take finite expectations, but does this actually prevent us having a consistent preference ordering over universes, or is it potentially just a representation issue? I would have guessed that the vNM axiom we’re violating here is continuity, which I tend to think of as a convenience assumption rather than an actual rationality requirement. (E.g. there’s not really anything substantively crazy about lexicographic preferences as far as I can tell, they’re just mathematically inconvenient to represent with real numbers.) Conflating a lack of real-valued representations with a lack of consistent preference orderings is a fairly common mistake in this space. That said, if it really were just a representation issue, I would have expected someone smarter than me to have noticed by now, so (in lieu of actually checking) I’m assigning that low probability for now.
Also, in addition to my previous response, I want to note that the issues with unbounded satisfaction measures are not unique to my infinite ethical system. Instead, they are common potential problems with a wide variety of aggregate consequentialist theories.
For example, suppose you’re a classical utilitarian with an unbounded utility measure per person. And suppose you know that the universe is finite and will consist of a single inhabitant whose utility follows a Cauchy distribution. Then your expected utilities are undefined, despite the universe being knowably finite.
Similarly, imagine if you again used classical utilitarianism but instead you have a finite universe with one utility monster and 3^^^3 regular people. Then, if your expected utilities are defined, you would need to give the utility monster what it wants, at the expense of everyone else.
So, I don’t think your concern about keeping utility functions bounded is unwarranted; I’m just noting that it’s part of a broader issue with aggregate consequentialism, not just with my ethical system.
Thanks. I’ve toyed with similar ideas previously myself. The advantage, if this sort of thing works, is that it conveniently avoids a major issue with preference-based measures: that they’re not unique and therefore incomparable across individuals. However, this method seems fragile in relying on a finite number of scenarios: doesn’t it break if it’s possible to imagine something worse than whatever the currently worst scenario is? (E.g. just keep adding 50 more years of torture.) While this might be a reasonable approximation in some circumstances, it doesn’t seem like a fully coherent solution to me.
As I said, you can allow for infinitely-many scenarios if you want; you just need to make it so that the supremum of their values is 1 and the infimum is 0. That is, imagine there’s an infinite sequence of scenarios you can come up with, each of which is worse than the last. Then just require that the infimum of the satisfactions of those scenarios is 0. That way, as you consider worse and worse scenarios, the satisfaction continues to decrease, but never gets below 0.
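For concreteness, here’s one purely illustrative satisfaction schedule with that shape, where n is the number of years of torture in the scenario:

$$s(n) = \frac{1}{1+n}, \qquad s(n+50) < s(n) \ \text{for all } n \ge 0, \qquad \inf_{n \ge 0} s(n) = 0,$$

so each additional 50 years still registers as strictly worse, while the scores all stay within [0, 1].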
IMO, the problem highlighted by the utility monster objection is fundamentally a prioritarian one. A transformation that guarantees boundedness above seems capable of resolving this, without requiring boundedness below (and thus avoiding the problematic consequences that boundedness below introduces).
One issue with only having boundedness above is that the expected life satisfaction for an arbitrary agent would probably often be undefined or −∞. For example, consider if an agent had a probability distribution like a Cauchy distribution, except that it assigns probability 0 to anything above the maximum level of satisfaction, and is then renormalized to have probabilities sum to 1. If I’m doing my calculus right, the resulting probability distribution’s expected value doesn’t converge. You could either interpret this as the expected utility being undefined or being −∞, since the Riemann sum approaches −∞ as the width of the columns approaches zero.
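(To sketch the divergence, under the assumption that the lower tail of this distribution still falls off like the Cauchy’s, i.e. with density roughly c/x² far out in the negative direction: the upper tail’s contribution to the expectation is now finite, but the lower tail contributes

$$\int_{-\infty}^{-M} x \cdot \frac{c}{x^{2}} \, dx \;=\; c \int_{-\infty}^{-M} \frac{dx}{x} \;=\; -\infty,$$

so the expectation as a whole is −∞, or undefined, depending on your convention.)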
That said, even if the expectations are defined, it doesn’t seem to me that keeping the satisfaction measure bounded above but not below would solve the problem of utility monsters. To see why, imagine a new utility monster as follows. The utility monster feels an incredibly strong need to have everyone on Earth be tortured. For the next hundred years, its satisfaction will decrease by 3^^^3 for every second there’s someone on Earth not being tortured. Thus, assuming the expectations converge, the moral thing to do, according to maximizing average, total, or expected-value-conditioning-on-being-in-this-universe life satisfaction, is to torture everyone. This is a problem both in finite and infinite cases.
A final random thought/question: I get that we can’t expected utility maximise unless we can take finite expectations, but does this actually prevent us having a consistent preference ordering over universes, or is it potentially just a representation issue?
If I understand what you’re asking correctly, you can indeed have consistent preferences over universes, even if you don’t have a bounded utility function. The issue is, in order to act, you need more than just a consistent preference order over possible universes. In reality, you only get to choose between probability distributions over possible worlds, not specific possible worlds. And with an unbounded utility function, this will tend to result in undefined expected utilities over possible actions, and thus won’t inform which action you should take, which is the whole point of utility theory and ethics.
Now, some probability distributions can have well-defined expected values even with an unbounded utility function. But, as I said, this is not robust, and I think that in practice the expected values of an unbounded utility function would be undefined.
you just need to make it so that the supremum of their values is 1 and the infimum is 0.
Fair. Intuitively though, this feels more like a rescaling of an underlying satisfaction measure than a plausible definition of satisfaction to me. That said, if you’re a preferentist, I accept this is internally consistent, and likely an improvement on alternative versions of preferentism.
One issue with only having boundedness above is that the expected life satisfaction for an arbitrary agent would probably often be undefined or −∞
Yes, and I am obviously not proposing a solution to this problem! More just suggesting that, if there are infinities in the problem that appear to correspond to actual things we care about, then defining them out of existence seems more like deprioritising the problem than solving it.
The utility monster feels an incredibly strong need to have everyone on Earth be tortured
I think this framing muddies the intuition pump by introducing sadistic preferences, rather than focusing just on unboundedness below. I don’t think it’s necessary to do this: unboundedness below means there’s a sense in which everyone is a potential “negative utility monster” if you torture them long enough. I think the core issue here is whether there’s some point at which we just stop caring, or whether that’s morally repugnant.
in order to act, you need more than just a consistent preference order over possible universes. In reality, you only get to choose between probability distributions over possible worlds, not specific possible worlds
Sorry, sloppy wording on my part. The question should have been “does this actually prevent us having a consistent preference ordering over gambles over universes” (even if we are not able to represent those preferences as maximising the expectation of a real-valued social welfare function)? We know (from lexicographic preferences) that “no-real-valued-utility-function-we-are-maximising-expectations-of” does not immediately imply “no-consistent-preference-ordering” (if we’re willing to accept orderings that violate continuity). So pointing to undefined expectations doesn’t seem to immediately rule out consistent choice.
I think this framing muddies the intuition pump by introducing sadistic preferences, rather than focusing just on unboundedness below. I don’t think it’s necessary to do this: unboundedness below means there’s a sense in which everyone is a potential “negative utility monster” if you torture them long enough. I think the core issue here is whether there’s some point at which we just stop caring, or whether that’s morally repugnant.
Fair enough. So I’ll provide a non-sadistic scenario. Consider again the scenario I previously described in which you have a 0.5 chance of being tortured for 3^^^^3 years, but also have the repeated opportunity to cause yourself minor discomfort in the case of not being tortured and as a result get your possible torture sentence reduced by 50 years.
If you have an unbounded-below utility function in which each additional 50 years of torture causes a linear decrease in satisfaction or utility, then to maximize expected utility or life satisfaction, it seems you would need to opt for living in extreme discomfort in the non-torture scenario in order to decrease your possible torture time by an astronomically small proportion, provided the expectations are defined.
To me, at least, it seems clear that you should not take the opportunities to reduce your torture sentence. After all, if you repeatedly decide to take them, you will end up with a 0.5 chance of being highly uncomfortable and a 0.5 chance of being tortured for 3^^^^3 years. This seems like a really bad lottery, and worse than the one that lets me have a 0.5 chance of having an okay life.
Sorry, sloppy wording on my part. The question should have been “does this actually prevent us having a consistent preference ordering over gambles over universes” (even if we are not able to represent those preferences as maximising the expectation of a real-valued social welfare function)? We know (from lexicographic preferences) that “no-real-valued-utility-function-we-are-maximising-expectations-of” does not immediately imply “no-consistent-preference-ordering” (if we’re willing to accept orderings that violate continuity). So pointing to undefined expectations doesn’t seem to immediately rule out consistent choice.
Oh, I see. And yes, you can have consistent preference orderings that aren’t represented as a utility function. And such techniques have been proposed before in infinite ethics. For example, one of Bostrom’s proposals to deal with infinite ethics is the extended decision rule. Essentially, it says to first look at the set of actions you could take that would maximize P(infinite good) - P(infinite bad). If there is only one such action, take it. Otherwise, take whatever action among these that has highest expected moral value given a finite universe.
As far as I know, you can’t represent the above as a utility function, despite it being consistent.
However, the big problem with the above decision rule is that it suffers from the fanaticism problem: you would be required to bear any finite cost, even 3^^^3 years of torture, for even an unfathomably small increase in the probability of infinite good or decrease in the probability of infinite bad. And this can get to pretty ridiculous levels. For example, suppose you are sure you can easily design a world that makes every creature happy and greatly increases the moral value of the world in a finite universe if implemented. However, coming up with such a design would take one second of computation on your supercomputer, which means one less second to keep thinking about astronomically-improbable situations in which you could cause infinite good. Sparing that second would thus cost you some minuscule chance of achieving infinite good or avoiding infinite bad. Thus, you decide not to help anyone, because you won’t spare the one second of computer time.
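Here’s a rough sketch of that rule in code, together with a toy version of the fanaticism worry. The probabilities and payoffs are obviously made up, and the field names are just my own labels for the quantities in the rule.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    p_infinite_good: float        # probability this action leads to infinite good
    p_infinite_bad: float         # probability this action leads to infinite bad
    finite_expected_value: float  # expected moral value given a finite universe

def extended_decision_rule(actions: list[Action]) -> Action:
    # Step 1: keep only the actions that maximize P(infinite good) - P(infinite bad).
    margins = [a.p_infinite_good - a.p_infinite_bad for a in actions]
    best_margin = max(margins)
    finalists = [a for a, m in zip(actions, margins) if m == best_margin]
    # Step 2: among those, break ties by expected moral value given a finite universe.
    return max(finalists, key=lambda a: a.finite_expected_value)

# Toy illustration of the fanaticism worry: an utterly negligible edge on the
# infinite margin overrides an enormous finite benefit.
actions = [
    Action("spend the second designing utopia", 0.100000, 0.0, 10.0**100),
    Action("spend the second pondering infinite good", 0.100001, 0.0, 0.0),
]
print(extended_decision_rule(actions).name)  # -> "spend the second pondering infinite good"
```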
More generally, I think the basic property of non-real-valued consistent preference orderings is that they value some things “infinitely more” than others. The issue is, if you really value some property infinitely more than some other property of lesser importance, it won’t be worth your time to even consider pursuing the property of lesser importance, because it’s always possible you could have used the extra computation to slightly increase your chances of getting the property of greater importance.
To me, at least, it seems clear that you should not take the opportunities to reduce your torture sentence. After all, if you repeatedly decide to take them, you will end up with a 0.5 chance of being highly uncomfortable and a 0.5 chance of being tortured for 3^^^^3 years. This seems like a really bad lottery, and worse than the one that lets me have a 0.5 chance of having an okay life.
FWIW, this conclusion is not clear to me. To return to one of my original points: I don’t think you can dodge this objection by arguing from potentially idiosyncratic preferences, even perfectly reasonable ones; rather, you need it to be the case that no rational agent could have different preferences. Either that, or you need to be willing to override otherwise rational individual preferences when making interpersonal tradeoffs.
To be honest, I’m actually not entirely averse to the latter option: having interpersonal trade-offs determined by contingent individual risk-preferences has never seemed especially well-justified to me (particularly if probability is in the mind). But I confess it’s not clear whether that route is open to you, given the motivation for your system as a whole.
More generally, I think the basic property of non-real-valued consistent preference orderings is that they value some things “infinitely more” than others.
FWIW, this conclusion is not clear to me. To return to one of my original points: I don’t think you can dodge this objection by arguing from potentially idiosyncratic preferences, even perfectly reasonable ones; rather, you need it to be the case that no rational agent could have different preferences. Either that, or you need to be willing to override otherwise rational individual preferences when making interpersonal tradeoffs.
Yes, that’s correct. It’s possible that there are some agents with consistent preferences that really would wish to get extraordinarily uncomfortable to avoid the torture. My point was just that this doesn’t seem like it would be a common thing for agents to want.
Still, it is conceivable that there are at least a few agents out there who would consistently want to opt for the 0.5 chance of being extremely uncomfortable, and I do suppose it would be best to respect their wishes. This is a problem that I hadn’t previously fully appreciated, so I would like to thank you for bringing it up.
Luckily, I think I’ve finally figured out a way to adapt my ethical system to deal with this. That is, the adaptation will allow agents to choose the extreme-discomfort-from-dust-specks option if that is what they wish, and my ethical system will respect their preferences. To do this, allow the measure of satisfaction to include infinitesimals. Then, to respect the preferences of such agents, you just need to pick the right satisfaction measure.
Consider the agent for which each 50 years of torture causes a linear decrease in their utility function. For simplicity, imagine torture and discomfort are the only things the agent cares about; they have no other preferences. Also assume that the agent dislikes torture more than it dislikes discomfort, but only by a finite amount. Since the agent’s utility function/satisfaction measure is linear, I suppose being tortured for an eternity would be infinitely worse for the agent than being tortured for a finite amount of time. So, assign satisfaction 0 to the scenario in which the agent is tortured for eternity. And if the agent is instead tortured for n∈R years, let the agent’s satisfaction be 1−nϵ, where ϵ is whatever infinitesimal number you want. If my understanding of infinitesimals is correct, I think this will do what we want it to do in terms of having agents using my ethical system respect the agent’s preferences.
Specifically, since being tortured forever would be infinitely worse than being tortured for a finite amount of time, any finite amount of torture would be accepted to decrease the chance of infinite torture. And this is what maximizing this satisfaction measure does: for any lottery, changing the chance of infinite torture has a finite effect on expected satisfaction, whereas changing the chance of finite torture only has an infinitesimal effect, so avoiding infinite torture would be prioritized.
Further, among lotteries involving finite amounts of torture, it seems the ethical system using this satisfaction measure continues to do what it’s supposed to do. For example, consider the choice between the previous two options:
1. A 0.5 chance of being tortured for 3^^^^3 years and a 0.5 chance of being fine.
2. A 0.5 chance of 3^^^^3 − 9999999 years of torture and a 0.5 chance of being extraordinarily uncomfortable.
If I’m using my infinitesimal math right, the expected satisfaction of taking option 1 would be 1 − 0.5∗3↑↑↑↑3∗ϵ, and the expected satisfaction of taking option 2 would be 1 − 0.5∗(3↑↑↑↑3−9999999)∗ϵ − 0.5∗m∗ϵ, for some m << 3↑↑↑↑3 measuring how bad the extraordinary discomfort is for the agent. Thus, to maximize this agent’s satisfaction measure, my moral system would indeed let the agent give infinite priority to avoiding infinite torture, the ethical system would itself consider the agent getting infinite torture to be infinitely worse than getting finite torture, and it would treat finite amounts of torture as decreasing satisfaction in a linear manner. And, since the satisfaction measure is still technically bounded, it would still avoid the problem with utility monsters.
(In case it was unclear, ↑ is Knuth’s up-arrow notation, just like “^”.)
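Here’s a small sketch of how I imagine the infinitesimal bookkeeping working: represent values of the form a + b∗ϵ as pairs and compare them lexicographically. T and m below are made-up stand-ins, since 3↑↑↑↑3 is far too large to actually write down.

```python
from dataclasses import dataclass
from fractions import Fraction

@dataclass(frozen=True)
class Hyper:
    """A value of the form std + coef*eps, for one fixed positive infinitesimal eps.

    This two-component representation is only a sketch: it supports the mixing and
    comparisons used above (standard parts first, then eps-coefficients), nothing more.
    """
    std: Fraction
    coef: Fraction

    def __add__(self, other: "Hyper") -> "Hyper":
        return Hyper(self.std + other.std, self.coef + other.coef)

    def scale(self, p: Fraction) -> "Hyper":
        return Hyper(p * self.std, p * self.coef)

    def __lt__(self, other: "Hyper") -> bool:
        # Lexicographic: any difference in the standard part swamps the eps part.
        return (self.std, self.coef) < (other.std, other.coef)


def satisfaction(years_of_torture: int) -> Hyper:
    """1 - n*eps for n finite years of torture; eternal torture would be Hyper(0, 0)."""
    return Hyper(Fraction(1), Fraction(-years_of_torture))


half = Fraction(1, 2)
T = 10**300   # stand-in for 3^^^^3, which is far too large to write down
m = 10**6     # hypothetical torture-year equivalent of the extreme discomfort

option1 = satisfaction(T).scale(half) + satisfaction(0).scale(half)
option2 = satisfaction(T - 9999999).scale(half) + satisfaction(m).scale(half)

# Finite torture still trades off linearly: with these made-up numbers, the reduced
# sentence outweighs the discomfort, just as the agent's linear preferences say.
print(option1 < option2)                                   # True
# And eternal torture is treated as infinitely worse than any finite amount.
print(Hyper(Fraction(0), Fraction(0)) < satisfaction(T))   # True
```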
Re the repugnant conclusion: apologies for the lazy/incorrect example. Let me try again with better illustrations of the same underlying point. To be clear, I am not suggesting these are knock-down arguments; just that, given widespread (non-infinitarian) rejection of average utilitarianisms, you probably want to think through whether your view suffers from the same issues and whether you are ok with that.
Though there’s a huge literature on all of this, a decent starting point is here:
However, the average view has very little support among moral philosophers since it suffers from severe problems.
First, consider a world inhabited by a single person enduring excruciating suffering. The average view entails that we could improve this world by creating a million new people whose lives were also filled with excruciating suffering if the suffering of the new people was ever-so-slightly less bad than the suffering of the original person.
Second, the average view entails the sadistic conclusion: It can sometimes be better to create lives with negative wellbeing than to create lives with positive wellbeing from the same starting point, all else equal.
Adding a small number of tortured, miserable people to a population diminishes the average wellbeing less than adding a sufficiently large number of people whose lives are pretty good, yet below the existing average...
Third, the average view prefers arbitrarily small populations over very large populations, as long as the average wellbeing was higher. For example, a world with a single, extremely happy individual would be favored to a world with ten billion people, all of whom are extremely happy but just ever-so-slightly less happy than that single person.
Third, the average view prefers arbitrarily small populations over very large populations, as long as the average wellbeing was higher. For example, a world with a single, extremely happy individual would be favored to a world with ten billion people, all of whom are extremely happy but just ever-so-slightly less happy than that single person.
In an infinite universe, there’s already infinitely-many people, so I don’t think this applies to my infinite ethical system.
First, consider a world inhabited by a single person enduring excruciating suffering. The average view entails that we could improve this world by creating a million new people whose lives were also filled with excruciating suffering if the suffering of the new people was ever-so-slightly less bad than the suffering of the original person.
Second, the average view entails the sadistic conclusion: It can sometimes be better to create lives with negative wellbeing than to create lives with positive wellbeing from the same starting point, all else equal.
In a finite universe, I can see why those verdicts would be undesirable. But in an infinite universe, there’s already infinitely-many people at all levels of suffering. So, according to my own moral intuition at least, it doesn’t seem that these are bad verdicts.
You might have differing moral intuitions, and that’s fine. If you do have an issue with this, you could potentially modify my ethical system to make it an analogue of total utilitarianism. Specifically, consider the probability distribution something would have if it conditioned on ending up somewhere in this universe, but didn’t even know whether it would be an actual agent with preferences or not. That is, it uses some prior that allows for the possibility of ending up as a preference-free rock or something. Also, make sure the measure of life satisfaction treats existences with neutral welfare and the existences of things without preferences as zero. Now, simply modify my system to maximize the expected value of life satisfaction, given this prior. That’s my total-utilitarianism-infinite-analogue ethical system.
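In symbols, as I intend it (this is just my shorthand, so don’t read too much into the exact notation): letting X be the “thing” you end up as under this prior, the quantity to maximize is

$$V \;=\; \mathbb{E}\big[\, s(X) \mid X \text{ is something in this universe} \,\big], \qquad s(X) = 0 \text{ if } X \text{ is preference-free or has neutral welfare},$$

so creating satisfied agents shifts probability mass from zero-valued things onto positive-satisfaction agents and thereby raises V, much as creating satisfied people raises a total-utilitarian sum.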
So, to give an example of how this works, consider the situation in which you can torture one person to avoid creating a large number of people with pretty decent lives. Well, the large number of people with pretty decent lives would increase the moral value of the world, because creating those people makes it more likely, under that prior, that something would end up as an agent with positive life satisfaction rather than as some inanimate object, conditioning only on being something in this universe. But adding a tortured creature would only decrease the moral value of the universe. Thus, this total-utilitarian-infinite-analogue ethical system would prefer creating the large number of people with decent lives to torturing the one creature.
Of course, if you accept this system, then you have to find a way to deal with the repugnant conclusion, just like you need to find a way to deal with it using regular total utilitarianism in a finite universe. I’ve yet to see any satisfactory solution to the repugnant conclusion. But if there is one, I bet you could extend it to this total-utilitarian-infinite-analogue ethical system. This is because this ethical system is a lot like regular total utilitarianism, except it replaces “total number of creatures with satisfaction x” with “total probability mass of ending up as a creature with satisfaction x”.
Given the lack of a satisfactory solution to the repugnant conclusion, I prefer the idea of just sticking with my average-utilitarianism-like infinite ethical system. But I can see why you might have different preferences.
In an infinite universe, there’s already infinitely-many people, so I don’t think this applies to my infinite ethical system.
YMMV, but FWIW allowing a system of infinite ethics to get finite questions (which should just be a special case) wrong seems a very non-ideal property to me, and suggests something has gone wrong somewhere. Is it really never possible to reach a state where all remaining choices have only finite implications?
For the record, according to my intuitions, average consequentialism seems perfectly fine to me in a finite universe.
That said, if you don’t like using average consequentialism in a finite case, I don’t personally see what’s wrong with just having a somewhat different ethical system for finite cases. I know it seems ad-hoc, but I think there really is an important distinction between finite and infinite scenarios. Specifically, people have the moral intuition that larger numbers of satisfied lives are more valuable than smaller numbers of them, which average utilitarianism conflicts with. But in an infinite universe, you can’t change the total amount of satisfaction or dissatisfaction.
But, if you want, you could combine both the finite ethical system and the infinite ethical system so that a single principle is used for moral deliberation. This might make it feel less ad hoc. For example, you could have a moral value function of the form f(total amount of satisfaction and dissatisfaction in the universe) * (expected value of life satisfaction for an arbitrary agent in this universe), where f is some bounded, increasing function that approaches its supremum only as its argument goes to ∞, and approaches it very slowly.
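For instance, one purely illustrative choice of f (with k some huge constant, so that f climbs very slowly) would be

$$f(t) \;=\; \frac{1}{2} + \frac{1}{\pi}\arctan\!\left(\frac{t}{k}\right),$$

which is bounded, strictly increasing, and only approaches its supremum of 1 as t → ∞.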
For those who don’t want this, they are free to use my total-utilitarian-infinite-analogue ethical system. I think it just ends up as regular total utilitarianism in a finite world, or close to it.
If I understand correctly, average utilitarianism isn’t rejected due to the repugnant conclusion. In fact, it’s the opposite: the repugnant conclusion is a problem for total utilitarianism, and average utilitarianism is one way to avoid the problem. I’m just going off what I read on The Stanford Encyclopedia of Philosophy, but I don’t have particular reason to doubt what it says.
Yes, I do think boundedness is essential for a utility function. The issue unbounded utility functions is that the expected value according to some probability distributions will be undefined. For example, if your utility follows a Cauchy distribution, then the expected utility is undefined.
Your actual probability distribution over utilities in an unbounded utility function wouldn’t exactly follow a Cauchy distribution. However, I think that for whatever reasonable probability distribution you would use in real life, an unbounded utility function have still have an undefined expected value.
To see why, note that there is a non-zero probability probability that your utility really will be sampled from a Cauchy distribution. For example, suppose you’re in some simulation run by aliens, and to determine your utility in your life after the simulation ends, they sample from the Cauchy distribution. (This is supposing that they’re powerful enough to give you any utility). I don’t have any completely conclusive evidence to rule out this possibility, so it has non-zero probability. It’s not clear to me why an alien would do the above, or that they would even have the power to, but I still have no way to rule it out with infinite confidence. So your expected utility, conditioning on being in this situation, would be undefined. As a result, you can prove that your total expected utility would also be undefined.
So it seems to me that the only way you can actually have your expected values be robustly well-defined is by having a bounded utility function.
In principle, I do think this could occur. I agree that at first it intuitively seems undesirable. However, I’m not convinced it is, and I’m not convinced that there is a value system that avoids this without having even more undesirable results.
It’s important to note that the sufficiently terrible lives need to be really, really, really bad already. So much so that being horribly tortured for fifty years does almost exactly nothing to affect their overall satisfaction. For example, maybe they’re already being tortured for more than 3^^^^3 years, so adding fifty more years does almost exactly nothing to their life satisfaction.
Maybe it still seems to you that getting tortured for 50 more years would still be worse than getting a dust speck in the eye of an average person. However, if so, consider this scenario. You know you have a 50% chance of being tortured for more than 3^^^^3 years, and a 50% chance not being tortured and living in a regular world. However, you have have a choice: you can agree to get a very minor form of discomfort, like a dust speck in your eye in the case in which you aren’t tortured, and you will as a result tortured for 50 fewer years if you don’t end up in the situation in which you get tortured. So I suppose, given what you say, you would take it. But suppose your were given this opportuinty again. Well, you’d again be able to subtract 50 years of torture and get just a dust speck, so I guess you’d take it.
Imagine you’re allowed to repeat this process for an extremely long time. If you think that getting one dust speck is worth it to avoid 50 years of torture, then I think you would keep accepting one more dust speck until your eyes have as much dust in them as they possibly could. And then, once you’re done this this, you could go on to accepting some other extremely minor form of discomfort to avoid another 50 years of torture. Maybe you you start accepting an almost-exactly-imperceptible amount of back pain for another 50 years of torture reduction. And then continue this until your back, and the rest of your body parts, hurt quite a lot.
Here’s the result of your deals: you have a 50% chance of being incredibly uncomfortable. Your eyes are constantly blinded and heavily irritated by dust specs, and you feel a lot of pain all over your body. And you have a 50% chance of being horribly tortured for more than 3^^^^3 years. Note that even though you get your tortured sentence reduced by 50 * <number extremely minor discomforts you get> years, this results in the amount of time your spend tortured would decrease by a very, very, very, almost infinitesimal proportion.
Personally, I much rather have a 50% chance of being able to have a life that actually decent, even if it means that I won’t get to decrease the amount of time I’d spend possibly getting tortured by a near-infinitesimal proportion.
What if you still refuse? Well, the only way I can think of justifying your refusal is by having an unbounded utility function, so getting an extra 50 years of torture is around as bad as getting the first 50 years of torture. But as I’ve said, the expected values of unbounded utility functions seem to be undefined in reality, so this doesn’t seem like a good idea.
My point from the above is that getting one more dust speck in someone’s eye could in principle be better than having someone be tortured for 50 years, provided the tortured person would already have been tortured by a super-ultra-virtually-infinitely long time anyways.
Re boundedness:
I realise now that I may have moved through a critical step of the argument quite quickly above, which may be why this quote doesn’t seem to capture the core of the objection I was trying to describe. Let me take another shot.
I am very much not suggesting that 50 years of torture does virtually nothing to [life satisfaction—or whatever other empirical value you want to take as axiologically primitive; happy to stick with life satisfaction as a running example]. I am suggesting that 50 years of torture is terrible for [life satisfaction]. I am then drawing a distinction between [life-satisfaction] and the output of the utility function that you then take expectations of. The reason I am doing this, is because it seems to me that whether [life satisfaction] is bounded is a contingent empirical question, not one that can be settled by normative fiat in order to make it easier to take expectations.
If, as a matter of empirical fact, [life satisfaction] is bounded, then the objection I describe will not bite.
If, on the other hand [life-satisfaction] is not bounded, then requiring the utility function you take expectations of to be bounded forces us to adopt some form of sigmoid mapping from [life satisfaction] to “utility”, and this in turn forces us, at some margin, to not care about things that are absolutely awful (from the perspective of [life satisfaction]). (If an extra 50 years of torture isn’t sufficient awful for some reason, then we just need to pick something more awful for the purposes of the argument).
Perhaps because I didn’t explain this very well the first time, what’s not totally clear to me from your response, is whether you think:
(a) [life satisfaction] is in fact bounded; or
(b) even if [life satisfaction] is unbounded, it’s actually ok to not care about stuff that is absolutely (infinitely?) awful from the perspective of [life-satisfaction] because it lets us take expectations more conveniently. [Intentionally provocative framing, sorry. Intended as an attempt to prompt genuine reflection, rather than to score rhetorical points.]
It’s possible that (a) is true, and much of your response seems like it’s probably (?) targeted at that claim, but FWIW, I don’t think this case can be convincingly made by appealing to contingent personal values: e.g. suggesting that another 50 years of torture wouldn’t much matter to you personally won’t escape the objection, as long as there’s a possible agent who would view their life-satisfaction as being materially reduced in the same circumstances.
Suggesting evolutionary bounds on satisfaction is another potential avenue of argument, but also feels too contingent to do what you really want.
Maybe you could make a case for (a) if you were to substitute a representation of individual preferences for [life satisfaction]? I’m personally disinclined towards preferences as moral primitives, particularly as they’re not unique, and consequently can’t deal with distributional issues, but YMMV.
ETA: An alternative (more promising?) approach could be to accept that, while it may not cover all possible choices, in practice we’re more likely to face choices with an infinite extensive margin than with an infinite intensive margin, and that the proposed method could be a reasonable decision rule for such choices. Practically, this seems like it would be acceptable as long as whatever function we’re using to map [life-satisfaction] into utility isn’t a sigmoid over the relevant range, and instead has a (weakly) negative second derivative over the (finite) range of [life satisfaction] covered by all relevant options.
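A concrete example of such a mapping (purely illustrative, not a proposal): something like

$$u(s) = 1 - e^{-s},$$

which is increasing and concave everywhere, bounded above by 1, and never saturates below over any finite range of s.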
(I assume (in)ability-to-take-expectations wasn’t intended as an argument for (a), as it doesn’t seem up to making such an empirical case?)
On the other hand, if you’re actually arguing for (b), then I guess that’s a bullet you can bite; though I think I’d still be trying to dodge it if I could. ETA: If there’s no alternative but to ignore infinities on either the intensive or extensive margin, I could accept choosing the intensive margin, but I’m inclined to think this choice should be explicitly justified, and recognised as tragic if it really can’t be avoided.
To some extent, whether or not life satisfaction is bounded just comes down to how you want to measure it. But it seems to me that any reasonable measure of life satisfaction really would be bounded.
I’ll clarify the measure of life satisfaction I had in mind. Imagine showing an agent finitely many descriptions of situations they could end up in, and asking the agent to pick out the worst and the best of them all. Assign the worst scenario satisfaction 0 and the best scenario satisfaction 1. For any other outcome w, set the satisfaction to p, where p is the probability at which the agent would be indifferent between getting w for certain and a lottery that gives the best scenario with probability p and the worst scenario with probability 1 − p. This is essentially the standard von Neumann–Morgenstern technique for constructing a utility function from elicited preferences. So, according to my definition, life satisfaction is bounded by definition.
(You can also take the limit of the agent’s preferences as the number of described situations approaches infinity, if you want and if it converges. If it doesn’t, then you could instead just ask the agent about its preferences over infinitely many scenarios and require the infimum of satisfactions to be 0 and the supremum to be 1. Also, you might need to do something special to deal with agents whose preferences are inconsistent even given infinite reflection, but I don’t think this is particularly relevant to the discussion.)
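Here’s a minimal sketch of that construction, assuming a hypothetical toy agent whose choices happen to come from a hidden value function that is linear (and unbounded below) in years of torture; `elicit_satisfaction`, `toy_prefers`, and the specific numbers are all made up for illustration:

```python
def elicit_satisfaction(prefers, scenarios, worst, best, tol=1e-9):
    """Assign each scenario a satisfaction in [0, 1] via the standard-gamble method.

    `prefers(outcome, p)` is assumed to answer: does the agent prefer getting
    `outcome` for certain over a lottery giving `best` with probability p and
    `worst` with probability 1 - p?
    """
    satisfactions = {worst: 0.0, best: 1.0}
    for w in scenarios:
        lo, hi = 0.0, 1.0
        # Binary search for the indifference probability p.
        while hi - lo > tol:
            p = (lo + hi) / 2
            if prefers(w, p):
                lo = p   # agent still prefers w for certain; raise the stakes
            else:
                hi = p   # agent prefers the lottery; lower the stakes
        satisfactions[w] = (lo + hi) / 2
    return satisfactions


# Toy agent whose hidden value function is unbounded below: linear in years of torture.
def hidden_value(years_of_torture):
    return -years_of_torture

def toy_prefers(outcome, p):
    worst_value, best_value = hidden_value(10**6), hidden_value(0)
    return hidden_value(outcome) > p * best_value + (1 - p) * worst_value

print(elicit_satisfaction(toy_prefers, [10, 1000, 500000], worst=10**6, best=0))
```

For this toy agent the elicited satisfactions come out as roughly 1 − (years of torture)/10^6, i.e. a bounded rescaling of the unbounded underlying values.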
Now, maybe you’re opposed to this measure. However, if you reject it, I think you have a pretty big problem you need to deal with: utility monsters.
To summarise the standard definition (per Wikipedia), a utility monster is a hypothetical being who gains far more utility from each unit of resources than anyone else does, so that a utilitarian calculus directs us to keep giving it resources at everyone else’s expense.
If you have some agents with unbounded satisfaction measures, then I think that would imply you would need to be willing to cause arbitrarily large amounts of suffering to agents with bounded satisfaction in order to increase the satisfaction of a utility monster as much as possible.
This seems pretty horrible to me, so I’m satisfied with keeping the measure of life satisfaction to be bounded.
In principle, you could have utility monster-like creatures in my ethical system, too. Perhaps all the agents other than the monster really have very little in the way of preferences, and so their life satisfaction doesn’t change much at all by you helping them. Then you could potentially give resources to the monster. However, the effect of “utility monsters” is much more limited in my ethical system, and it’s an effect that doesn’t seem intuitively undesirable to me. Unlike if you had an unbounded satisfaction measure, my ethical system doesn’t allow a single agent to cause arbitrarily large amounts of suffering to arbitrarily large numbers of other agents.
Further, suppose you do decide to have an unbounded measure of life satisfaction and aggregate it so that even a finite universe can have arbitrarily high or low moral value. Then the expected moral value of the world would be undefined, just as the expected values of unbounded utility functions are undefined. Specifically, just consider having a Cauchy distribution over the moral value of the universe. Such a distribution has no expected value. So, if you’re trying to maximize the expected moral value of the universe, you won’t be able to. And, as a moral agent, what else are you supposed to do?
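As a quick numerical illustration (just a simulation sketch, not part of the argument itself): the running sample mean of Cauchy draws never settles down, because the mean of n standard Cauchy samples is itself standard-Cauchy-distributed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Running sample means of standard Cauchy draws. Unlike a distribution with a
# finite expected value, these never settle down as the sample size grows.
for n in (10**3, 10**5, 10**7):
    print(n, rng.standard_cauchy(n).mean())
```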
Also, I want to mention that there’s a trivial case in which you could avoid having my ethical system torture the agent for 50 years. Specifically, maybe there’s some particular 50 years that decreases the agent’s life satisfaction a lot, even though other 50-year stretches don’t. For example, maybe the agent dreads the idea of having more than a million years of torture, so specifically adding those last 50 years would be a problem. But I’m guessing you aren’t worrying about this specific case.
Thanks. I’ve toyed with similar ideas previously myself. The advantage, if this sort of thing works, is that it conveniently avoids a major issue with preference-based measures: that they’re not unique and therefore incomparable across individuals. However, this method seems fragile in relying on a finite number of scenarios: doesn’t it break if it’s possible to imagine something worse than whatever the currently worst scenario is? (E.g. just keep adding 50 more years of torture.) While this might be a reasonable approximation in some circumstances, it doesn’t seem like a fully coherent solution to me.
IMO, the problem highlighted by the utility monster objection is fundamentally a prioritarian one. A transformation that guarantees boundedness above seems capable of resolving this, without requiring boundedness below (and thus avoiding the problematic consequences that boundedness below introduces).
Given issues with the methodology proposed above for constructing bounded satisfaction functions, it’s still not entirely clear to me that this is really a decision, as opposed to an empirical question (which we then need to decide how to cope with from a normative perspective). This seems like it may be a key difference in our perspectives here.
Well, in general terms the answer to this question has to be either (a) bite a bullet, or (b) find another solution that avoids the uncomfortable trade-offs. It seems to me that you’ll be willing to bite most bullets here. (Though I confess it’s actually a little hard for me to tell whether you’re also denying that there’s any meaningful tradeoff here; that case still strikes me as less plausible.) If so, that’s fine, but I hope you’ll understand why to some of us that might feel less like a solution to the issue of infinities, than a decision to just not worry about them on a particular dimension. Perhaps that’s ultimately necessary, but it’s definitely non-ideal from my perspective.
A final random thought/question: I get that we can’t expected utility maximise unless we can take finite expectations, but does this actually prevent us having a consistent preference ordering over universes, or is it potentially just a representation issue? I would have guessed that the vNM axiom we’re violating here is continuity, which I tend to think of as a convenience assumption rather than an actual rationality requirement. (E.g. there’s not really anything substantively crazy about lexicographic preferences as far as I can tell; they’re just mathematically inconvenient to represent with real numbers.) Conflating a lack of real-valued representations with a lack of consistent preference orderings is a fairly common mistake in this space. That said, if it really were just a representation issue, I would have expected someone smarter than me to have noticed by now, so (in lieu of actually checking) I’m assigning that low probability for now.
Also, in addition to my previous response, I want to note that the issues with unbounded satisfaction measures are not unique to my infinite ethical system. Instead, they are common potential problems with a wide variety of aggregate consequentialist theories.
For example, suppose you’re a classical utilitarian with an unbounded utility measure per person. And suppose you know that the universe is finite and will consist of a single inhabitant whose utility follows a Cauchy distribution. Then your expected utilities are undefined, despite the universe being knowably finite.
Similarly, imagine you again used classical utilitarianism, but instead you have a finite universe with one utility monster and 3^^^3 regular people. Then, if your expected utilities are defined, you would need to give the utility monster whatever it wants, at the expense of everyone else.
So, I don’t think your concern about keeping utility functions bounded is unwarranted; I’m just noting that these problems are part of a broader issue with aggregate consequentialism, not something specific to my ethical system.
Agreed!
As I said, you can allow for infinitely many scenarios if you want; you just need to make it so that the supremum of their values is 1 and the infimum is 0. That is, imagine there’s an infinite sequence of scenarios you can come up with, each of which is worse than the last. Then just require that the infimum of the satisfactions of those scenarios is 0. That way, as you consider worse and worse scenarios, the satisfaction continues to decrease, but never gets below 0.
One issue with only having boundedness above is that the expected life satisfaction of an arbitrary agent would probably often be undefined or −∞. For example, consider an agent whose probability distribution is like a Cauchy distribution, except that it assigns probability 0 to anything above the maximum level of satisfaction and is then renormalized so the probabilities sum to 1. If I’m doing my calculus right, the resulting probability distribution’s expected value doesn’t converge. You could interpret this expected utility either as undefined or as −∞, since the Riemann sum approaches −∞ as the width of the columns approaches zero.
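For what it’s worth, here’s the calculation I have in mind, writing c for the truncation point and k for the renormalisation constant (both just notation for this sketch):

$$\int_{-\infty}^{c} x \cdot \frac{k}{\pi\,(1+x^{2})}\,dx = -\infty,$$

since for large negative x the integrand behaves like k/(πx), whose integral over the lower tail diverges, while the contribution from the bounded upper part is finite.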
That said, even if the expectations are defined, it doesn’t seem to me that keeping the satisfaction measure bounded above but not below would solve the problem of utility monsters. To see why, imagine a new utility monster as follows. The utility monster feels an incredibly strong need to have everyone on Earth be tortured. For the next hundred years, its satisfaction will decrease by 3^^^3 for every second there’s someone on Earth not being tortured. Thus, assuming the expectations converge, the moral thing to do, according to maximizing average, total, or expected-value-conditioning-on-being-in-this-universe life satisfaction, is to torture everyone. This is a problem in both finite and infinite cases.
If I understand what you’re asking correctly, you can indeed have consistent preferences over universes, even if you don’t have a bounded utility function. The issue is that, in order to act, you need more than just a consistent preference ordering over possible universes. In reality, you only get to choose between probability distributions over possible worlds, not specific possible worlds. And this, with an unbounded utility function, will tend to result in undefined expected utilities over possible actions, and so it won’t tell you which action to take, which is the whole point of utility theory and ethics.
Now, some probability distributions do yield well-defined expected values even with an unbounded utility function. But, as I said, this is not robust, and I think that in practice the expected values of an unbounded utility function would be undefined.
Fair. Intuitively though, this feels more like a rescaling of an underlying satisfaction measure than a plausible definition of satisfaction to me. That said, if you’re a preferentist, I accept this is internally consistent, and likely an improvement on alternative versions of preferentism.
Yes, and I am obviously not proposing a solution to this problem! More just suggesting that, if there are infinities in the problem that appear to correspond to actual things we care about, then defining them out of existence seems more like deprioritising the problem than solving it.
I think this framing muddies the intuition pump by introducing sadistic preferences, rather than focusing just on unboundedness below. I don’t think it’s necessary to do this: unboundedness below means there’s a sense in which everyone is a potential “negative utility monster” if you torture them long enough. I think the core issue here is whether there’s some point at which we just stop caring, or whether that’s morally repugnant.
Sorry, sloppy wording on my part. The question should have been “does this actually prevent us having a consistent preference ordering over gambles over universes” (even if we are not able to represent those preferences as maximising the expectation of a real-valued social welfare function)? We know (from lexicographic preferences) that “no-real-valued-utility-function-we-are-maximising-expectations-of” does not immediately imply “no-consistent-preference-ordering” (if we’re willing to accept orderings that violate continuity). So pointing to undefined expectations doesn’t seem to immediately rule out consistent choice.
Fair enough. So I’ll provide a non-sadistic scenario. Consider again the scenario I previously described in which you have a 0.5 chance of being tortured for 3^^^^3 years, but also have the repeated opportunity to cause yourself minor discomfort in the case of not being tortured and as a result get your possible torture sentence reduced by 50 years.
If you have an unbounded-below utility function in which each 50 years of torture causes a linear decrease in satisfaction or utility, then to maximize expected utility or life satisfaction, it seems you would need to opt for living in extreme discomfort in the non-torture scenario in order to decrease your possible torture time by an astronomically small proportion, provided the expectations are defined.
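To spell out the arithmetic schematically (writing d for the disutility of 50 years of torture and c for the disutility of one increment of extreme discomfort, purely as labels for this sketch), each accepted trade changes expected utility by

$$0.5\,d - 0.5\,c > 0 \quad \text{whenever } d > c,$$

so with a linear, unbounded-below measure the expected-utility maximizer keeps accepting trades until it is maximally uncomfortable, even though each trade removes only an astronomically small fraction of the possible torture.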
To me, at least, it seems clear that you should not take the opportunities to reduce your torture sentence. After all, if you repeatedly decide to take them, you will end up with a 0.5 chance of being highly uncomfortable and a 0.5 chance of being tortured for 3^^^^3 years. This seems like a really bad lottery, and worse than the one that leaves you a 0.5 chance of having an okay life.
Oh, I see. And yes, you can have consistent preference orderings that aren’t represented as a utility function. Such techniques have been proposed before in infinite ethics. For example, one of Bostrom’s proposals to deal with infinite ethics is the extended decision rule. Essentially, it says to first look at the set of actions you could take that would maximize P(infinite good) − P(infinite bad). If there is only one such action, take it. Otherwise, take whichever of these actions has the highest expected moral value given a finite universe.
As far as I know, you can’t represent the above as a utility function, despite it being consistent.
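A rough sketch of that rule as a decision procedure; `p_inf_good`, `p_inf_bad`, and `expected_finite_value` are placeholders for whatever credence and value functions you actually have:

```python
def extended_decision_rule(actions, p_inf_good, p_inf_bad, expected_finite_value):
    """Bostrom-style extended decision rule (sketch).

    1. Keep only the actions maximising P(infinite good) - P(infinite bad).
    2. Break ties by expected moral value conditional on a finite universe.
    """
    def infinite_score(a):
        return p_inf_good(a) - p_inf_bad(a)

    best = max(infinite_score(a) for a in actions)
    candidates = [a for a in actions if infinite_score(a) == best]
    if len(candidates) == 1:
        return candidates[0]
    return max(candidates, key=expected_finite_value)
```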
However, the big problem with the above decision rule is that it suffers from the fanaticism problem: people would be willing to bear any finite cost, even 3^^^3 years of torture, to have even an unfathomably small chance of increasing the probability of infinite good or decreasing the probability of infinite bad. And this can get to pretty ridiculous levels. For example, suppose you are sure you could easily design a world that makes every creature happy and greatly increases the moral value of the world in a finite universe if implemented. However, coming up with such a design would take one second of computation on your supercomputer, which means one less second to keep thinking about astronomically improbable situations in which you could cause infinite good. Spending that second would thus have some minuscule chance of forgoing infinite good or causing infinite bad. Thus, you decide not to help anyone, because you won’t spare the one second of computer time.
More generally, I think the basic property of non-real-valued consistent preference orderings is that they value some things “infinitely more” than others. The issue is, if you really value some property infinitely more than some other property of lesser importance, it won’t be worth your time to even consider pursuing the property of lesser importance, because it’s always possible you could have used the extra computation to slightly increase your chances of getting the property of greater importance.
FWIW, this conclusion is not clear to me. To return to one of my original points: I don’t think you can dodge this objection by arguing from potentially idiosyncratic preferences, even perfectly reasonable ones; rather, you need it to be the case that no rational agent could have different preferences. Either that, or you need to be willing to override otherwise rational individual preferences when making interpersonal tradeoffs.
To be honest, I’m actually not entirely averse to the latter option: having interpersonal trade-offs determined by contingent individual risk-preferences has never seemed especially well-justified to me (particularly if probability is in the mind). But I confess it’s not clear whether that route is open to you, given the motivation for your system as a whole.
That makes sense, thanks.
Yes, that’s correct. It’s possible that there are some agents with consistent preferences who really would wish to get extraordinarily uncomfortable to avoid the torture. My point was just that this doesn’t seem like it would be a common thing for agents to want.
Still, it is conceivable that there are at least a few agents out there who would consistently want to opt for the 0.5 chance of being extremely uncomfortable, and I do suppose it would be best to respect their wishes. This is a problem that I hadn’t previously fully appreciated, so I would like to thank you for bringing it up.
Luckily, I think I’ve finally figured out a way to adapt my ethical system to deal with this. That is, the adaptation will allow agents to choose the extreme-discomfort-from-dust-specks option if that is what they wish, while still having my ethical system respect their preferences. To do this, allow the satisfaction measure to include infinitesimals. Then, to respect the preferences of such agents, you just need to pick the right satisfaction measure.
Consider the agent for whom each 50 years of torture causes a linear decrease in their utility function. For simplicity, imagine torture and discomfort are the only things the agent cares about; they have no other preferences. Also assume that the agent dislikes torture more than it dislikes discomfort, but only by a finite amount. Since the agent’s utility function/satisfaction measure is linear, I suppose being tortured for an eternity would be infinitely worse for the agent than being tortured for a finite amount of time. So, assign satisfaction 0 to the scenario in which the agent is tortured for eternity. And if the agent is instead tortured for n∈R years, let the agent’s satisfaction be 1−nϵ, where ϵ is whatever infinitesimal number you want. If my understanding of infinitesimals is correct, I think this will do what we want it to do in terms of having agents using my ethical system respect this agent’s preferences.
Specifically, since being tortured forever would be infinitely worse than being tortured for a finite amount of time, any finite amount of torture would be accepted to decrease the chance of infinite torture. And this is what maximizing this satisfaction measure does: for any lottery, changing the chance of infinite torture has a finite effect on expected satisfaction, whereas changing the chance of finite torture only has an infinitesimal effect, so avoiding infinite torture would be prioritized.
Further, among lotteries involving finite amounts of torture, it seems the ethical system using this satisfaction measure continues to do what it’s supposed to do. For example, consider the choice between the previous two options:
1. A 0.5 chance of being tortured for 3^^^^3 years and a 0.5 chance of being fine.
2. A 0.5 chance of 3^^^^3 − 9999999 years of torture and a 0.5 chance of being extraordinarily uncomfortable.
If I’m using my infinitesimal math right, the expected satisfaction of taking option 1 would be 1 − (0.5∗(3↑↑↑↑3)ϵ + 0.5∗ϵ), and the expected satisfaction of taking option 2 would be 1 − (0.5∗(3↑↑↑↑3−9999999)ϵ + 0.5∗mϵ), for some m<<3↑↑↑↑3. Thus, to maximize this agent’s satisfaction measure, my moral system would indeed let the agent give infinite priority to avoiding infinite torture; the ethical system would itself consider infinite torture infinitely worse for the agent than finite torture, and would treat finite amounts of torture as decreasing satisfaction linearly. And, since the satisfaction measure is still technically bounded, it would still avoid the problem with utility monsters.
(In case it was unclear, ↑ is Knuth’s up-arrow notation, just like “^”.)
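If it helps, here is a toy sketch of how that bookkeeping could work, representing a satisfaction value a + b·ϵ as the pair (a, b) and comparing lotteries lexicographically; the huge number standing in for 3↑↑↑↑3 and the particular value of m are obviously just placeholders for illustration:

```python
from fractions import Fraction

# A satisfaction value a + b*eps is stored as (a, b); eps is a positive
# infinitesimal, so comparisons are lexicographic: the standard part a
# dominates, and the infinitesimal coefficient b only breaks ties.
def expected_satisfaction(lottery):
    a = sum(p * sat[0] for p, sat in lottery)
    b = sum(p * sat[1] for p, sat in lottery)
    return (a, b)

N = Fraction(10) ** 100   # placeholder for 3^^^^3 (far too large to store exactly)
m = Fraction(10) ** 6     # disutility of extreme discomfort, with m << N

def satisfaction(years_tortured):
    # 1 - n*eps for finite torture; eternal torture would be (0, 0).
    return (Fraction(1), -years_tortured)

option_1 = [(Fraction(1, 2), satisfaction(N)),
            (Fraction(1, 2), satisfaction(0))]
option_2 = [(Fraction(1, 2), satisfaction(N - 9999999)),
            (Fraction(1, 2), (Fraction(1), -m))]  # uncomfortable, not tortured

# Python tuples compare lexicographically, matching the infinitesimal ordering.
print(expected_satisfaction(option_2) > expected_satisfaction(option_1))
# True here: since m < 9999999, the linear agent prefers taking the discomfort.
```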
Re the repugnant conclusion: apologies for the lazy/incorrect example. Let me try again with better illustrations of the same underlying point. To be clear, I am not suggesting these are knock-down arguments; just that, given widespread (non-infinitarian) rejection of average utilitarianisms, you probably want to think through whether your view suffers from the same issues and whether you are ok with that.
Though there’s a huge literature on all of this, a decent starting point is here:
Thanks for the response.
In an infinite universe, there’s already infinitely-many people, so I don’t think this applies to my infinite ethical system.
In a finite universe, I can see why those verdicts would be undesirable. But in an infinite universe, there’s already infinitely-many people at all levels of suffering. So, according to my own moral intuition at least, it doesn’t seem that these are bad verdicts.
You might have differing moral intuitions, and that’s fine. If you do have an issue with this, you could potentially modify my ethical system to make it an analogue of total utilitarianism. Specifically, consider the probability distribution something would have if it conditions on ending up somewhere in this universe, but doesn’t even know whether it will be an actual agent with preferences or not. That is, it uses some prior that allows for the possibility of ending up as a preference-free rock or something. Also, make sure the measure of life satisfaction treats existences with neutral welfare, and the existences of things without preferences, as zero. Now, simply modify my system to maximize the expected value of life satisfaction given this prior. That’s my total-utilitarianism-infinite-analogue ethical system.
So, to give an example of how this works, consider the situation in which you can torture one person to avoid creating a large number of people with pretty decent lives. The large number of people with pretty decent lives would increase the moral value of the world, because creating those people makes it more likely, under that prior, that something in this universe ends up as an agent with positive life satisfaction rather than as some inanimate object. But adding a tortured creature would only decrease the moral value of the universe. Thus, this total-utilitarianism-infinite-analogue ethical system would prefer creating the large number of people with decent lives to torturing the one creature.
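As a toy, finite stand-in for how that prior behaves (everything here, including the number of non-agent “things”, the satisfaction values, and the negative value for the tortured life, is made up purely for illustration; the real system uses a prior over an infinite universe):

```python
def expected_satisfaction_over_things(agent_satisfactions, num_non_agents):
    # Under this prior, non-agents (rocks, etc.) count as "things" with
    # satisfaction 0, and neutral-welfare lives also count as 0.
    total_things = len(agent_satisfactions) + num_non_agents
    return sum(agent_satisfactions) / total_things

ROCKS = 10**9  # arbitrary large reservoir of preference-free things

base = expected_satisfaction_over_things([0.6] * 1000, ROCKS)
with_new_people = expected_satisfaction_over_things([0.6] * 1000 + [0.4] * 10**6, ROCKS)
with_tortured = expected_satisfaction_over_things([0.6] * 1000 + [-0.9], ROCKS)

print(with_new_people > base)  # True: creating decent lives raises the expectation
print(with_tortured < base)    # True: adding a tortured creature lowers it
```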
Of course, if you accept this system, then you have to find a way to deal with the repugnant conclusion, just as you need to find a way to deal with it under regular total utilitarianism in a finite universe. I’ve yet to see any satisfactory solution to the repugnant conclusion, but if there is one, I bet you could extend it to this total-utilitarianism-infinite-analogue ethical system. This is because this ethical system is a lot like regular total utilitarianism, except that it replaces “total number of creatures with satisfaction x” with “total probability mass of ending up as a creature with satisfaction x”.
Given the lack of a satisfactory solution to the repugnant conclusion, I prefer the idea of just sticking with my average-utilitarianism-like infinite ethical system. But I can see why you might have different preferences.
YMMV, but FWIW allowing a system of infinite ethics to get finite questions (which should just be a special case) wrong seems a very non-ideal property to me, and suggests something has gone wrong somewhere. Is it really never possible to reach a state where all remaining choices have only finite implications?
For the record, according to my intuitions, average consequentialism seems perfectly fine to me in a finite universe.
That said, if you don’t like using average consequentialism in a finite case, I don’t personally see what’s wrong with just having a somewhat different ethical system for finite cases. I know it seems ad-hoc, but I think there really is an important distinction between finite and infinite scenarios. Specifically, people have the moral intuition that larger numbers of satisfied lives are more valuable than smaller numbers of them, which average utilitarianism conflicts with. But in an infinite universe, you can’t change the total amount of satisfaction or dissatisfaction.
But, if you want, you could combine both the finite ethical system and the infinite ethical system so that a single principle is used for moral deliberation. This might make it feel less ad hoc. For example, you could have a moral value function of the form f(total amount of satisfaction and dissatisfaction in the universe) ∗ (expected value of life satisfaction for an arbitrary agent in this universe), where f is some bounded, increasing function that approaches its supremum only as its argument goes to ∞, and approaches it very slowly.
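As a purely illustrative sketch of the sort of combination I mean (the particular choice of f here, a rescaled arctan, is just one possibility, not a considered proposal, and the names are made up):

```python
import math

def f(total_net_satisfaction):
    # One possible bounded, increasing choice: a rescaled arctan, with values
    # in (0, 1) and supremum 1 approached only as the total goes to +infinity.
    return (math.atan(total_net_satisfaction) + math.pi / 2) / math.pi

def combined_moral_value(total_net_satisfaction, expected_agent_satisfaction):
    # In a finite world, f rewards larger totals; in an infinite world the
    # total can't be changed by your actions, so the first factor is fixed and
    # only the average-like second factor matters.
    return f(total_net_satisfaction) * expected_agent_satisfaction
```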
For those who don’t want this, they are free to use my total-utilitarianism-infinite-analogue ethical system. I think it just ends up as regular total utilitarianism in a finite world, or close to it.