Third, the average view prefers arbitrarily small populations over very large populations, as long as the average wellbeing is higher. For example, a world with a single, extremely happy individual would be favored over a world with ten billion people, all of whom are extremely happy but just ever-so-slightly less happy than that single person.
In an infinite universe, there are already infinitely many people, so I don't think this applies to my infinite ethical system.
First, consider a world inhabited by a single person enduring excruciating suffering. The average view entails that we could improve this world by creating a million new people whose lives were also filled with excruciating suffering if the suffering of the new people was ever-so-slightly less bad than the suffering of the original person.
Second, the average view entails the sadistic conclusion: It can sometimes be better to create lives with negative wellbeing than to create lives with positive wellbeing from the same starting point, all else equal.
In a finite universe, I can see why those verdicts would be undesirable. But in an infinite universe, there are already infinitely many people at all levels of suffering. So, according to my own moral intuition at least, these don't seem like bad verdicts.
You might have differing moral intuitions, and that's fine. If you do take issue with this, you could potentially modify my ethical system to make it an analogue of total utilitarianism. Specifically, consider the probability distribution something would have if it conditioned only on ending up somewhere in this universe, without even knowing whether it will be an actual agent with preferences. That is, it uses a prior that allows for the possibility of ending up as a preference-free rock or something similar. Also, make sure the measure of life satisfaction treats existences with neutral welfare, and the existences of things without preferences, as zero. Now, simply modify my system to maximize the expected value of life satisfaction given this prior. That's my total-utilitarianism-infinite-analogue ethical system.
So, to give an example of how this works, consider a situation in which you can torture one person to avoid creating a large number of people with pretty decent lives. The large number of people with pretty decent lives would increase the moral value of the world, because creating those people makes it more likely, under a prior that conditions only on being something in this universe, that a thing ends up as an agent with positive life satisfaction rather than as some inanimate object. But adding a tortured creature would only decrease the moral value of the universe. Thus, this total-utilitarian-infinite-analogue ethical system would prefer creating the large number of people with decent lives over torturing the one creature.
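To make this concrete, here is a minimal sketch in Python, assuming a toy representation of a world as a list of (probability mass, life satisfaction, is-agent) triples with made-up numbers. None of this is the actual formalism of my system; it just shows how dropping the conditioning on agenthood changes the expectation.

```python
# A toy sketch with made-up numbers, not the actual formalism of the system.
# Each entry: (probability mass of "ending up as" this kind of thing,
#              life satisfaction, whether it is an agent with preferences).
# Non-agents and neutral-welfare existences both count as 0 satisfaction.
baseline = [
    (0.50, 0.0, False),   # preference-free matter (rocks, etc.)
    (0.30, 0.6, True),    # agents with pretty decent lives
    (0.20, -0.4, True),   # agents with somewhat bad lives
]

def average_style_value(world):
    """Expected life satisfaction, conditioning on ending up as an agent
    (the average-utilitarianism-like system)."""
    agent_mass = sum(p for p, _, is_agent in world if is_agent)
    return sum(p * s for p, s, is_agent in world if is_agent) / agent_mass

def total_style_value(world):
    """Expected life satisfaction over all things in the universe,
    with non-agents counting as 0 (the total-utilitarian-infinite-analogue)."""
    return sum(p * s for p, s, _ in world)

# Creating many people with decent lives moves probability mass from
# "preference-free matter" to "agents with decent lives"...
create_decent_lives = [
    (0.40, 0.0, False),
    (0.40, 0.6, True),
    (0.20, -0.4, True),
]
# ...while torturing one person adds a sliver of mass at very negative
# satisfaction.
torture_one = [
    (0.4999, 0.0, False),
    (0.30, 0.6, True),
    (0.20, -0.4, True),
    (0.0001, -1.0, True),
]

print(total_style_value(baseline))             # ≈ 0.10
print(total_style_value(create_decent_lives))  # ≈ 0.16   (better)
print(total_style_value(torture_one))          # ≈ 0.0999 (worse)
```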
Of course, if you accept this system, then you have to find a way to deal with the repugnant conclusion, just as you need to find a way to deal with it under regular total utilitarianism in a finite universe. I've yet to see any satisfactory solution to the repugnant conclusion. But if there is one, I bet you could extend it to this total-utilitarian-infinite-analogue ethical system. This is because this ethical system is a lot like regular total utilitarianism, except that it replaces "total number of creatures with satisfaction x" with "total probability mass of ending up as a creature with satisfaction x".
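To spell out that substitution, here is a toy side-by-side sketch; the dictionary-based helpers below are hypothetical and just illustrate the parallel structure.

```python
# counts[x] = total number of creatures with satisfaction x    (finite total utilitarianism)
# masses[x] = probability mass of ending up as a creature with
#             satisfaction x, under the prior described above  (infinite analogue)

def finite_total_value(counts):
    # Classic total utilitarianism: sum over satisfaction levels of
    # (number of creatures at that level) * (that level).
    return sum(n * x for x, n in counts.items())

def infinite_analogue_value(masses):
    # Same shape, with counts replaced by probability masses.
    return sum(p * x for x, p in masses.items())
```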
Given the lack of a satisfactory solution to the repugnant conclusion, I prefer the idea of just sticking with my average-utilitarianism-like infinite ethical system. But I can see why you might have different preferences.
In an infinite universe, there are already infinitely many people, so I don't think this applies to my infinite ethical system.
YMMV, but FWIW allowing a system of infinite ethics to get finite questions (which should just be a special case) wrong seems a very non-ideal property to me, and suggests something has gone wrong somewhere. Is it really never possible to reach a state where all remaining choices have only finite implications?
For the record, average consequentialism seems perfectly fine to my intuitions in a finite universe.
That said, if you don't like using average consequentialism in the finite case, I don't personally see what's wrong with just having a somewhat different ethical system for finite cases. I know it seems ad hoc, but I think there really is an important distinction between finite and infinite scenarios. Specifically, people have the moral intuition that larger numbers of satisfied lives are more valuable than smaller numbers of them, which average utilitarianism conflicts with. But in an infinite universe, you can't change the total amount of satisfaction or dissatisfaction.
But, if you want, you could combine the finite and infinite ethical systems so that a single principle is used for moral deliberation, which might make it feel less ad hoc. For example, you could have a moral value function of the form f(total amount of satisfaction and dissatisfaction in the universe) × (expected value of life satisfaction for an arbitrary agent in this universe), where f is some bounded, increasing function that is maximized in the limit as its argument goes to infinity and approaches that maximum very slowly.
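As a toy illustration of that combination: the particular f below is an arbitrary choice, just one bounded, increasing function that climbs toward its supremum of 1 very slowly; nothing in my system commits to it.

```python
import math

def f(total_amount):
    """A bounded, increasing function of the total amount of satisfaction
    and dissatisfaction in the universe. It approaches its supremum of 1
    only as the total goes to infinity, and does so very slowly.
    This particular form is an arbitrary illustrative choice."""
    return 1.0 - 1.0 / math.log(total_amount + math.e)

def combined_moral_value(total_amount, expected_life_satisfaction):
    """f(total amount of satisfaction and dissatisfaction) multiplied by
    the expected life satisfaction of an arbitrary agent in the universe."""
    return f(total_amount) * expected_life_satisfaction
```

With an f like this, in an infinite universe the total is already infinite, so f sits at its supremum and only the expected-satisfaction factor matters; in a finite universe, adding satisfied lives also raises the total and hence f.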
Those who don't want this are free to use my total-utilitarian-infinite-analogue ethical system instead; I think it just ends up as regular total utilitarianism in a finite world, or close to it.