Maybe natural selection is quite like that scientist
The survival instinct part, very probably, but the “constant misery” part doesn’t look likely.
Actually, I don’t understand where the “animals have negative utility” thing is coming from. Sure, let’s postulate that fish can feel pain. So what? How do you know that fish don’t experience intense pleasure from feeling water stream by their sides?
I just don’t see any reasonable basis for deciding what the utility balance for most animals looks like. And from the evolutionary standpoint, “constant misery” is nonsense—constant stress is not conducive to survival.
fear of consequences in an afterlife
Are we talking about humans now? I thought the OP considered humans to be more or less fine; it’s the animals that were the problem.
Does anyone claim that the net utility of humanity is negative?
“Is a life net positive in terms of all the experience moments it adds to the universe’s playlist?”
I have no idea what this means.
not an empirical question; it’s more of an aesthetic judgment
Ah. Well then, let’s kill everyone who fails our aesthetic judgment..?
then some would claim I have an obligation … and I’d be doing harm to their preferences
That’s a very common attitude—see e.g. attitudes to abortion, to optional wars, etc. However, “paternalistic” implies an imbalance of power—you can’t be paternalistic to an equal.
The survival instinct part, very probably, but the “constant misery” part doesn’t look likely.
Agree, I meant to use the analogy to argue for “Natural selection made sure that even those beings in constant misery may not necessarily exhibit suicidal behavior.” (I do hold the view that animals in nature suffer a lot more than they are happy, but that doesn’t follow from anything I wrote in the above post.)
Are we talking about humans now? I thought the OP considered humans to be more or less fine; it’s the animals that were the problem.
Right, but I thought your argument about sentient beings not committing suicide referred to humans primarily. At least with regard to humans, explaining why the appeal to low suicide rates may not show much seems more challenging. Animals not killing themselves could just be due to them lacking the relevant mental concepts.
I have no idea what this means.
It’s a metaphor. Views on population ethics reflect what we want the “playlist” of all the universe’s experience moments to be like, and there’s no objective sense of “net utility being positive” or not. You could question-beggingly define “net utility” in a way that implies a conclusion, but then anyone who disagrees will just say “I don’t think we should define utility that way” and you’re left arguing over the same differences. That’s why I called it “aesthetic,” even though that feels like it doesn’t do justice to the seriousness of our moral intuitions.
Ah. Well then, let’s kill everyone who fails our aesthetic judgment..?
(And force everyone to live against their will if they do conform to it?) No; I specifically said not to do that. Viewing morality as subjective is supposed to make people more appreciative that they cannot go around completely violating the preferences of those they disagree with without the result being worse for everyone.
Lukas, I wish you had a bigger role in this community.
“Natural selection made sure that even those beings in constant misery may not necessarily exhibit suicidal behavior.”
Not sure this is the case. I would expect that natural selection made sure that no being is systematically in constant misery, and so there is no need for the “but if you are in constant misery you can’t commit suicide anyway” part.
Views on population ethics
I still don’t understand what that means. Are you talking about believing that other people should have particular ethical views and it’s bad if they don’t?
No; I specifically said not to do that.
Well, the OP thinks it might be reasonable to kill everything with a nervous system because in his view all of them suffer too much. However, if that is just an aesthetic judgment...
without the result being worse for everyone
Well, clearly not everyone, since you will have winners and losers. And to evaluate this on the basis of some average/combined utility requires you to be a particular kind of utilitarian.
I still don’t understand what that means. Are you talking about believing that other people should have particular ethical views and it’s bad if they don’t?
I’m trying to say that other people are going to disagree with you or me about how to assess whether a given life is worth continuing or worth bringing into existence (big difference according to some views!), and on how to rank populations that differ in size and the quality of the lives in them. These are questions that the discipline of population ethics deals with, and my point is that there’s no right answer (and probably also no “safe” answer where you won’t end up disagreeing with others).
This^^ is all about a “morality as altruism” view, where you contemplate what it means to “make the world better for other beings.” I think this part is subjective.
There is also a very prominent “morality as cooperation/contract” view, where you contemplate the implications of decision algorithms correlating with each other, and notice that it might be a bad idea to adhere to principles that lead to outcomes worse for everyone in expectation provided that other people (in sufficiently similar situations) follow the same principles. This is where people start with whatever goals/preferences they have and derive reasons to be nice and civil to others (provided they are on an equal footing) from decision theory and stuff. I wholeheartedly agree with all of this and would even say it’s “objective” – but I would call it something like “pragmatics for civil society” or maybe “decision theoretic reasons for cooperation” and not “morality,” which is the term I reserve for (ways of) caring about the well-being of others.
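A minimal toy sketch of that correlation point (the payoff numbers and correlation probabilities below are illustrative assumptions, not anything from this thread): when agents running similar decision algorithms tend to mirror my choice, defecting also makes it likelier that I get defected against, so cooperating can win in expectation even though defection dominates against a fixed opponent.

```python
# Toy model of the "decision algorithms correlating" point: a one-shot
# prisoner's dilemma where the other agent tends to mirror my choice.
# All payoff numbers and correlation values are illustrative assumptions.

# Payoffs from my perspective for (my_move, their_move).
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect
    ("D", "C"): 5,  # I defect, they cooperate
    ("D", "D"): 1,  # mutual defection
}

def expected_payoff(my_move: str, correlation: float) -> float:
    """Expected payoff if the other agent plays my move with probability
    `correlation` (a similar decision algorithm) and the opposite move
    otherwise."""
    opposite = "D" if my_move == "C" else "C"
    return (correlation * PAYOFF[(my_move, my_move)]
            + (1 - correlation) * PAYOFF[(my_move, opposite)])

for corr in (0.0, 0.5, 0.9):
    print(f"correlation={corr}: "
          f"cooperate={expected_payoff('C', corr):.1f}, "
          f"defect={expected_payoff('D', corr):.1f}")

# At correlation 0.0 defection wins (5.0 vs 0.0), as in the textbook
# one-shot game; at 0.9 cooperation wins (2.7 vs 1.4). With these
# payoffs the crossover sits at correlation 5/7.
```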
It’s pretty clear that “killing everyone on earth” is not in most people’s interest, and I appreciate that people are pointing this out to the OP. However, I think what the replies are missing is that there is a second dimension, namely whether we should be morally glad about the world as it currently exists, and whether e.g. we should make more worlds that are exactly like ours, for the sake of the not-yet-born inhabitants of these new worlds. This is what I compared to voting on what the universe’s playlist of experience moments should be like.
But I’m starting to dislike the analogy. Let’s say that existing people have aesthetic preferences about how to allocate resources (this includes things like wanting to rebuild galaxies into a huge replica of Simpsons characters because it’s cool). Of these, a subset are simultaneously also moral preferences, in that they are motivated by a desire to do good for others; and these moral preferences can differ in whether they count it as important to bring about new happy beings, or in how much extra happiness is needed to altruistically “compensate” (if that’s even possible) for the harm of a given amount of suffering, etc. And the domain where people compare each other’s moral preferences and try to see if they can get more convergence through arguments and intuition pumps, in the same sense as someone might start to appreciate Mozart more after studying music theory or whatever, is population ethics (or “moral axiology”).
other people are going to disagree with you or me
Of course, that’s a given.
These are questions that the discipline of population ethics deals with
So is this discipline basically about ethics of imposing particular choices on other people (aka the “population”)? That makes it basically the ethics of power or ethics of the ruler(s).
You also call it “morality as altruism”, but I think there is a great deal of difference between having the power to impose your own perceptions of “better” (“it’s for your own good”) and not having such power, being limited to offering suggestions and accepting that some/most will be rejected.
“morality as cooperation/contract” view
What happens with this view if you accept that diversity will exist and at least some other people will NOT follow the same principles? Simple game theory analyses in a monoculture environment are easy to do, but have very little relationship to real life.
whether we should be morally glad about the world as it currently exists
That looks to me like a continuous (and probably multidimensional) value. All moralities operate in terms of “should” and none find the world as it is to be perfect. This means that all contemplate the gap between “is” and “should be”, and this gap can be seen as great or as not that significant.
whether e.g. we should make more worlds that are exactly like ours
Ask me when you acquire the capability :-)
So is this discipline basically about ethics of imposing particular choices on other people (aka the “population”)? That makes it basically the ethics of power or ethics of the ruler(s).
That’s an interesting way to view it, but it seems accurate. Suppose God created the world; contractualist ethics or the ethics of cooperation wouldn’t have applied to him, but we’d still get a sense of what his population-ethical stance must have been.
No one ever gets asked whether they want to be born. This is one of the issues where there is no such thing as “not taking a stance”; how we act in our lifetimes is going to affect what sort of minds there will or won’t be in the far future. We can discuss suggestions and try to come to a consensus among those currently in power, but future generations are indeed in a powerless position.
how we act in our lifetimes is going to affect what sort of minds there will or won’t be in the far future
Sure, but so what? Your power to deliberately and purposefully affect these things is limited by your ability to understand and model the development of the world sufficiently well to know which levers to pull. I would like to suggest that for the “far future” that power is indistinguishable from zero.