“Natural selection made sure that even those beings in constant misery may not necessarily exhibit suicidal behavior.”
Not sure this is the case. I would expect that natural selection made sure that no being is systematically in constant misery, so there is no need for the “but if you are in constant misery you can’t commit suicide anyway” part.
Views on population ethics
I still don’t understand what that means. Are you talking about believing that other people should have particular ethical views and it’s bad if they don’t?
No; I specifically said not to do that.
Well, the OP thinks it might be reasonable to kill everything with a nervous system because in his view all of them suffer too much. However, if that is just an aesthetic judgement...
without the result being worse for everyone
Well, clearly not everyone, since you will have winners and losers. And evaluating this on the basis of some average/combined utility requires you to be a particular kind of utilitarian.
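A toy numerical sketch of the distinction being drawn here (the names and welfare numbers are made up for illustration, not taken from the thread): whether a change is “worse for everyone” is a Pareto-style question about each individual, while “worse by some average/combined utility” depends on which aggregation rule you happen to endorse.

```python
# Made-up per-person welfare levels under two hypothetical outcomes.
status_quo = {"alice": 5, "bob": 5, "carol": 5}
change = {"alice": 9, "bob": 9, "carol": 1}  # winners (Alice, Bob) and a loser (Carol)

def total(utilities):
    return sum(utilities.values())

def average(utilities):
    return sum(utilities.values()) / len(utilities)

# Pareto question: is the change literally worse for everyone?
worse_for_everyone = all(change[p] < status_quo[p] for p in status_quo)
print(worse_for_everyone)                    # False: only Carol is worse off

# Aggregate questions: the answers only settle anything if you already endorse
# a particular aggregation rule (i.e. a particular kind of utilitarianism).
print(total(change), total(status_quo))      # 19 15     -> "better" by total utility
print(average(change), average(status_quo))  # ~6.33 5.0 -> "better" by average utility
```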
I still don’t understand what that means. Are you talking about believing that other people should have particular ethical views and it’s bad if they don’t?
I’m trying to say that other people are going to disagree with you or me about how to assess whether a given life is worth continuing or worth bringing into existence (big difference according to some views!), and on how to rank populations that differ in size and the quality of the lives in them. These are questions that the discipline of population ethics deals with, and my point is that there’s no right answer (and probably also no “safe” answer where you won’t end up disagreeing with others).
This^^ is all about a “morality as altruism” view, where you contemplate what it means to “make the world better for other beings.” I think this part is subjective.
There is also a very prominent “morality as cooperation/contract” view, where you contemplate the implications of decision algorithms correlating with each other, and notice that it might be a bad idea to adhere to principles that lead to outcomes worse for everyone in expectation provided that other people (in sufficiently similar situations) follow the same principles. This is where people start with whatever goals/preferences they have and derive reasons to be nice and civil to others (provided they are on an equal footing) from decision theory and stuff. I wholeheartedly agree with all of this and would even say it’s “objective” – but I would call it something like “pragmatics for civil society” or maybe “decision theoretic reasons for cooperation” and not “morality,” which is the term I reserve for (ways of) caring about the well-being of others.
It’s pretty clear that “killing everyone on earth” is not in most people’s interest, and I appreciate that people are pointing this out to the OP. However, I think what the replies are missing is that there is a second dimension, namely whether we should be morally glad about the world as it currently exists, and whether e.g. we should make more worlds that are exactly like ours, for the sake of the not-yet-born inhabitants of these new worlds. This is what I compared to voting on what the universe’s playlist of experience moments should be like.
But I’m starting to dislike the analogy. Let’s say that existing people have aesthetic preferences about how to allocate resources (this includes things like wanting to rebuild galaxies into a huge replica of Simpsons characters because it’s cool), and of these, a subset are simultaneously also moral preferences in that they are motivated by a desire to do good for others. These moral preferences can differ in whether they count it as important to bring about new happy beings or not, or in how much extra happiness is needed to altruistically “compensate” (if that’s even possible) for the harm of a given amount of suffering, etc. And the domain where people compare each other’s moral preferences and try to see if they can get more convergence through arguments and intuition pumps, in the same sense as someone might start to appreciate Mozart more after studying music theory or whatever, is population ethics (or “moral axiology”).
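As a toy illustration of the kind of ranking disagreement population ethics deals with (hypothetical welfare numbers, chosen only to make the point): two standard aggregation rules can rank the same pair of populations in opposite ways, which is one reason there is no obviously “right” or “safe” answer here.

```python
# Hypothetical populations, each given as a list of per-person welfare levels.
small_happy = [10, 10, 10]   # a few people, each very well off
large_modest = [3] * 20      # many people, each modestly well off

def total(pop):
    return sum(pop)

def average(pop):
    return sum(pop) / len(pop)

# Total utilitarianism ranks the large population higher...
print(total(large_modest) > total(small_happy))      # True  (60 > 30)
# ...while average utilitarianism ranks the small one higher.
print(average(small_happy) > average(large_modest))  # True  (10.0 > 3.0)
# Neither rule is forced on anyone; which (if either) to adopt is exactly the
# kind of open question being pointed at above.
```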
These are questions that the discipline of population ethics deals with
So is this discipline basically about ethics of imposing particular choices on other people (aka the “population”)? That makes it basically the ethics of power or ethics of the ruler(s).
You also call it “morality as altruism”, but I think there is a great deal of difference between having power to impose your own perceptions of “better” (“it’s for your own good”) and not having such power, being limited to offering suggestions and accepting that some/most will be rejected.
“morality as cooperation/contract” view
What happens with this view if you accept that diversity will exist and at least some other people will NOT follow the same principles? Simple game theory analyses in a monoculture environment are easy to do, but have very little relationship to real life.
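A rough sketch of this objection, using a standard one-shot prisoner’s dilemma with hypothetical payoffs and a deliberately simple population model (these assumptions are mine, not the commenters’): the “cooperate, because agents whose decision procedure correlates with yours will cooperate too” argument gets weaker as the like-minded fraction of the population shrinks.

```python
# Payoffs to "you" in a one-shot prisoner's dilemma (standard hypothetical numbers).
# T (temptation) never comes up below, because non-like-minded players defect anyway.
R, S, T, P = 3, 0, 5, 1  # reward (C,C), sucker (C,D), temptation (D,C), punishment (D,D)

def expected_payoff(my_move, p_like_minded):
    """Expected payoff when a fraction p_like_minded of opponents mirror your move
    (their decision procedure is correlated with yours) and the rest defect
    unconditionally."""
    if my_move == "C":
        return p_like_minded * R + (1 - p_like_minded) * S
    # If you defect, the mirrors defect with you and the rest defect anyway,
    # so you always end up in mutual defection.
    return P

for p in (1.0, 0.5, 0.2):
    print(p, expected_payoff("C", p), expected_payoff("D", p))
# p = 1.0 -> C pays 3.0, D pays 1: the "monoculture" case, where cooperating clearly wins.
# p = 0.5 -> C pays 1.5, D pays 1: cooperation still wins, but barely.
# p = 0.2 -> C pays ~0.6, D pays 1: with few like-minded players, defection wins.
```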
whether we should be morally glad about the world as it currently exists
That looks to me like a continuous (and probably multidimensional) value. All moralities operate in terms of “should” and none find the world as it is to be perfect. This means that all contemplate the gap between “is” and “should be”, and this gap can be seen as great or as not that significant.
whether e.g. we should make more worlds that are exactly like ours
Ask me when you acquire the capability :-)
So is this discipline basically about ethics of imposing particular choices on other people (aka the “population”)? That makes it basically the ethics of power or ethics of the ruler(s).
That’s an interesting way to view it, but it seems accurate. Say God created the world; then contractualist ethics or the ethics of cooperation didn’t apply to him, but we’d get a sense of what his population-ethical stance must have been. No one ever gets asked whether they want to be born. This is one of the issues where there is no such thing as “not taking a stance”: how we act in our lifetimes is going to affect what sort of minds there will or won’t be in the far future. We can discuss suggestions and try to come to a consensus among those currently in power, but future generations are indeed in a powerless position.
how we act in our lifetimes is going to affect what sort of minds there will or won’t be in the far future
Sure, but so what? Your power to deliberately and purposefully affect these things is limited by your ability to understand and model the development of the world sufficiently well to know which levers to pull. I would like to suggest that for the “far future” that power is indistinguishable from zero.