If this means what it appears to, that takes you outside the realm of welfarist axiologies—in which case Arrhenius’ impossibility results don’t even apply.
My understanding is that they would apply even more strongly: the more extraneous factors you add, the more likely you are to get one or more of the impossibility results (eg you can get the sadistic conclusion in “total utilitarianism + intrinsic value for art”).
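(A toy version of that parenthetical; the figures and the weight given to art are illustrative assumptions, not anything specified above. Score an outcome as total welfare plus the value of the art it contains. Then, relative to any fixed base population,

$$
\Delta V(\text{add } 100 \text{ lives at welfare } +0.01,\ \text{no art}) = 100 \times 0.01 = 1,
$$
$$
\Delta V(\text{add } 1 \text{ life at welfare } -1 \text{ producing art worth } 3) = -1 + 3 = 2 > 1,
$$

so this mixed view prefers adding a life with negative welfare to adding lives with positive welfare, which is exactly the shape of the sadistic conclusion.)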
However, this view can lie well within the realm of welfarism. Only personal preferences need be counted (the welfarist assumption makes no requirement about how they are aggregated).
When I said “non-personal preferences”, I did not put the cut directly at the welfarist/non-welfarist boundary. But anyway, here are some things I suspect would make it into my final population ethics:
Some value to more equal systems (also between groups).
Some value to diversity of existing beings.
In contrast, some very broad set of criteria that encompasses “the human condition”, and possibly reduced value to entities outside that.
Some small value to group entities such as cultures and traditions (though this may be contained within diversity).
Some small value to the continued existence of certain human practices that seem worthwhile (eg making stories, valuing truth to some extent).
Some intrinsic value to the “individual liberties” of today.
Possibly some value to continued societal progress.
Avoidance of the repugnant conclusion.
Asymmetry between birth and death.
As you can see, it’s far from a well-formed whole yet! Most of these (apart from the birth-death asymmetry, avoiding the repugnant conclusion, and possibly diversity) I don’t hold particularly strongly, and would just prefer some accommodation; ie if most people are part of the same huge, happy monoculture, a much smaller number of people in alternate cultures (with the possibility of others joining them) would be fine.
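(Purely as an illustrative gloss on how such a mixed view might be written down; every term and weight below is an assumption rather than anything stated above. It might look something like

$$
V \;=\; \sum_i w_i \;+\; \lambda_1 E \;+\; \lambda_2 \log(1 + D) \;+\; \dots
$$

where the $w_i$ are individual welfare levels, $E$ is some measure of equality, $D$ counts distinct cultures or kinds of beings, and the $\lambda$ weights are small. The logarithm is one way of capturing the “accommodation” point: going from one culture to a handful adds a lot, while going from many to many more adds almost nothing.)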
I think trying to be clinical is useful in terms of slicing up the space of interacting factors and discovering which are actually driving our intuitions, and which merely get swept up along the way and tainted by association because of the examples we first thought of.
There’s definitely a need for clinical detachment, but I think there’s also a time for emotional engagement. Decomposing our intuitions may sometimes just destroy them for no good reason (eg how bureaucracies can commit atrocities because responsibility is diffused, and no individual component is doing anything strongly wrong).
My understanding is that they would apply even more strongly: the more extraneous factors you add, the more likely you are to get one or more of the impossibility results (eg you can get the sadistic conclusion in “total utilitarianism + intrinsic value for art”).
I think you’re confused here. The impossibility result is the theorem that says you get one of these apparently undesirable conclusions. It’s a theorem about the class of axiologies which are welfarist (so depend only on personal welfare levels). If you look at a wider class of axiologies, the theorem doesn’t apply. Of course some of them may additionally get one of these conclusions as well. It’s also possible that we could extend the theorem to a slightly larger class.
Here’s another, equivalent, statement of the theorem:
Any system of population ethics does at least one of the following:
Embraces the repugnant conclusion;
Embraces the sadistic conclusion;
Embraces the anti-egalitarian conclusion;
Is not welfarist.
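(For concreteness, a worked instance of the first branch, with illustrative numbers: total utilitarianism is welfarist, and it embraces the repugnant conclusion, since

$$
V_{\mathrm{total}}(A) = 10^{6} \times 100 = 10^{8}
\qquad < \qquad
V_{\mathrm{total}}(Z) = 10^{10} \times 1 = 10^{10},
$$

so a world Z of ten billion people with lives barely worth living is ranked above a world A of a million people with excellent lives. Any welfarist system that avoids this must instead take one of the other two branches.)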
Mixed systems (welfarist+other stuff) still fall prey to the argument. You can see this by holding “other stuff” constant, and considering the choices between populations that differ only in welfare. Or you can allow “other stuff” to vary, which makes it easy to violate more of the six conditions (the Dominance principle, the Addition principle, the Minimal Non-Extreme Priority principle, the Repugnant conclusion, the Sadistic conclusion, and the Anti-Egalitarian conclusion), maybe violating them all.
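(Spelling out the first half of this, under the illustrative assumption that the mixed value simply adds a non-welfare term:

$$
V(p) = W(p) + O(p), \qquad O(p) = O(q) \;\Rightarrow\; \big(V(p) \ge V(q) \iff W(p) \ge W(q)\big),
$$

so on the restricted class of comparisons where “other stuff” is held fixed, the system ranks populations exactly as the welfarist $W$ does, and inherits whichever of the conclusions $W$ is stuck with.)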
It doesn’t seem clear that it is always possible to keep “other stuff” constant when varying welfare?
I guess I don’t see how you’re defining mixed systems. My first version makes any axiology at all “mixed”, since you can just take the reliance on welfare to be trivial (which is a trivial example of a welfarist system).
If you have broader theorems about violating this set of conditions, they would be very interesting to know about.
Actually I’m not sure the anti-egalitarian conclusion is even well-formed for non-welfarist systems. You can look at welfare levels (if you think those exist) to get what looks like a form of the conclusion, but then we might say that what looks anti-egalitarian is not better because of the less equal arrangement of welfare, but for some other, non-welfare, reasons. Which doesn’t seem necessarily pathological (if you are happy with non-welfare reasons entering in).
It seems to me that the most defective assumption is that the well-being of the whole is some linear combination of individual preferences, even for very large numbers and in very atypical circumstances (e.g. involving copies).
You can’t expect that you could divide the universe into arbitrarily small cubes (10cm? 1cm? 1mm? 1nm?) and then measure each cube’s preferences or quality of life and sum those.
So, on purely logical grounds, summing cannot be a general rule that you can apply to all sorts of living things, including the living thing that is the 3cm by 3cm by 3cm cube within your head.
I thus don’t see any good reason to keep privileging this assumption.
If you are a philosopher and you want your paper to look scientific, then you need mathematical symbols, and you need to do some algebra so it looks like something worthwhile is going on. In which case, by all means, go ahead and assume summation; it will help you write a paper.
But if you are interested in studying actual human ethics, it is clear that we evaluate the whole in non-linear ways. We have to, just to recognize the ethical value of a human while not recognizing any ethical value in a quark: we don’t think that any individual quark within a human feels pain, but we think that the whole does.
Conversely, we recognize that an individual pedophile wants to watch child porn, but we do not sacrifice a child to that, however large the number of pedophiles. And this is no more incoherent than recognizing that an individual person has preferences while the elementary particles do not.
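(One toy way to make “non-linear evaluation” concrete for this last example, as an illustrative construction rather than anything proposed here: let trivial preference-satisfactions enter only through a bounded term,

$$
V \;=\; \sum_{i \,\in\, \text{serious interests}} w_i \;+\; \tanh\!\Big(\sum_{j \,\in\, \text{trivial preferences}} w_j\Big),
$$

so the combined weight of arbitrarily many trivial preferences can never exceed 1, while the serious harm to the child counts at full size. No number of pedophiles flips the verdict, yet each individual preference still makes some (diminishing) contribution.)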