I disagree with your reasoning in this respect. It seems to me like I have some sort of moral duty to make sure creatures with certain values exist, and this is not only because of their utility, or the utility they bring to existing creatures.
For instance, suppose I had a choice between: A) everyone on Earth dies in a disaster, and later a species with humanlike values evolves; or B) everyone on Earth dies in a disaster, and later a race of creatures with extremely unhumanlike values evolves. I would pick A, even though neither choice would increase the utility of anyone currently existing. I would even pick A if the total utility of the creatures in A were lower than the total utility of the creatures in B, as long as the total utility was positive.
I agree that ultimately, the only reason I am motivated to act in such a fashion is because of my desires and values. But that does not mean that morality reduces only to satisfying desires and values. It means that it is morally good to create such creatures, and I desire and value acting morally.
I would choose A even if it meant existing people would have to make some sort of small sacrifice before the disaster killed them.
And in the case of the Repugnant Conclusion I might choose A or Q over Z even if every single person is created at the same time, instead of one population existing prior to the other.
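A is better than B for currently existing people.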
As I pointed out in my other response to your post, that doesn’t actually mean anything.
I’m not referring to VNM utility; what I’m talking about is closer to what the Wikipedia entry calls “E-utility.” To summarize the difference: suppose I do something that makes me miserable, but that greatly benefits someone else and is therefore morally good. I will have higher VNM utility than if I hadn’t done it, because VNM utility includes doing what is morally good as part of my utility function. My E-utility, however, will be lower than if I hadn’t done it.
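To put the distinction in toy form (just an illustrative sketch, not anything rigorous): split my VNM utility over an outcome $o$ into an experiential part and a moral part,

$$U_{VNM}(o) = U_E(o) + U_M(o).$$

For the miserable-but-beneficial act, $U_E$ might drop by 3 while $U_M$ rises by 10, so my VNM utility goes up by 7 even though my E-utility goes down.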
Interpersonal E-utility comparisons aren’t that hard. I do them all the time in my day-to-day life. That’s basically what the human capacity for empathy is: I use it to model other people’s minds. In fact, if it weren’t possible to do interpersonal E-utility comparisons between agents, it’s hard to see how humans would ever have evolved the capacity for empathy in the first place.
I would choose A even if it meant existing people would have to make some sort of small sacrifice before the disaster killed them.
So would I. As evidenced by our choices, we care enough about A vs B that A is still better than B in terms of our (VNM) utility even if A requires us to make a small local sacrifice.
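Roughly speaking (a back-of-the-envelope way to say the same thing): if $c$ is the VNM cost of the small sacrifice, our choices show that $U(A) - c > U(B)$, at least for small enough $c$.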
If you were talking about E-utility this entire time, then that changes everything. Our preferences over people’s E-utilities are tremendously complicated, so any ideal population ethics described in terms of E-utilities will also be tremendously complicated. It doesn’t help that E-utility is a pretty fuzzy concept.
Also, saying that you’re talking about E-utility doesn’t solve the problem of interpersonal utility comparison. Humans tend to have similar desires as other humans, so comparing their utility isn’t too hard in practice. But how would you compare the E-utilities experienced by a human and a paperclip maximizer?
If you were talking about E-utility this entire time, then that changes everything.
I was. Do you think I should make a note of that somewhere in the OP? I should have realized that on a site that talks about decision theory so often, I might give the impression I was talking about VNM utility instead of E-utility.
But how would you compare the E-utilities experienced by a human and a paperclip maximizer?
That is difficult, to be sure. Some kind of normalizing assumption is probably necessary. One avenue of attack would be the concept of a “life worth living”: for a human it would be a life where positive experiences outweighed the negative; for a paperclipper, one where its existence resulted in more paperclips than not.
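To make that normalizing assumption a bit more concrete (a very rough sketch, nothing more): take each being’s own “life worth living” point as the zero of its E-utility scale, e.g.

$$U_E^{human} = (\text{positive experiences}) - (\text{negative experiences}), \quad U_E^{clipper} = (\text{paperclips created}) - (\text{paperclips destroyed}).$$

That at least fixes whether a given life counts as positive or negative on its own scale.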
It may be that the further we get from the human psyche, the fuzzier the comparisons get. I can tell that a paperclipper whose existence has resulted in the destruction of a thousand paperclips has a lower E-utility than a human who lives a life very much worth living. But I have trouble seeing how to determine how much E-utility a paperclipper who has made a positive number of paperclips has compared to that person.
I was. Do you think I should make a note of that somewhere in the OP?
That might be a good idea.
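I added a footnote.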
One avenue of attack would be the concept of a “life worth living,”
That tells you what zero utility is, but it doesn’t give you scale.
for a paperclipper one where its existence resulted in more paperclips than not. … I can tell that a paperclipper whose existence has resulted in the destruction of a thousand paperclips has a lower E-utility than a human who lives a life very much worth living. But I have trouble seeing how to determine how much E-utility a paperclipper who has made a positive number of paperclips has compared to that person.
It sounds like you’re equating E-utility with VNM utility for paperclippers. It seems more intuitive to me to say that paperclippers don’t have E-utilities, because it isn’t their experiences that they care about.
It sounds like you’re equating E-utility with VNM utility for paperclippers. It seems more intuitive to me to say that paperclippers don’t have E-utilities, because it isn’t their experiences that they care about.
That’s probably right. That also brings up what I consider an issue in describing the utility of humans. Right now we are basically dividing the VNM utility of humans into E-utility and what might be termed “moral utility,” or “M-utility.” I’m wondering if there is anything else. That is, I wonder if human beings have any desires that are not either desires to have certain experiences, or desires to do something they believe is morally right. Maybe you could call it “nonpersonal nonmoral utility,” or “NN-utility” for short.
I wracked my brain and I can’t think of any desires I have that do not fit the categories of M-utility or E-utility. But maybe I’m just not thinking hard enough. Paperclippers are obviously nothing but NN-utility, but I wonder if it’s present in humans at all.
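As a rough schematic (just to have something to point at, not a serious formalization):

$$U_{VNM} = U_E + U_M + U_{NN},$$

with $U_E$ for desires about my own experiences, $U_M$ for desires to do what I believe is morally right, and $U_{NN}$ for whatever is left over. The question is whether the $U_{NN}$ term is nonzero for any actual human.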
Aesthetics?
ETA: I seem to be able to coherently desire the existence of artwork that I will never see and the completion of processes I will not interact with.
Presumably other people will see and enjoy that artwork, so aesthetics in that case might be a form of morality (you care about other people enjoying art, even if you never see that art yourself).
On the other hand, if you desire the existence of an aesthetically beautiful* natural rock formation, even if it was on a lifeless planet that no one would ever visit, that might count as NN utility.
*Technically the rock formation wouldn’t be beautiful, since something’s beauty is a property of the mind beholding it, not a property of the object. But then you could just steelman that statement to say you desire the existence of a rock formation that would be found beautiful by a mind beholding it, even if no mind ever beholds it.
Actually, my aesthetics seem to be based as much on very powerful and possibly baseless intuitions of “completeness” as on beauty.
Omega offers to create a dead universe containing either a printout of all possible chess games or a printout of all possible chess games minus n randomly selected ones; my absurdly strong preference for the former is unaffected by his stipulation that no agent will ever interact with said universe and that I myself will immediately forget the conversation. And I don’t even like chess.
What’s the difference between M-utility and NN-utility? Does it have to do with the psychology of why you have the preference? What if there’s an alien with radically different psychology from us, and they developed morality-like preferences to help them cooperate, but they don’t think about them the way we think about morality? Would they have M-utility? Also, the separation between E-utility and M/NN-utility will get fuzzy if we can make uploads.
What’s the difference between M-utility and NN-utility? Does it have to do with the psychology of why you have the preference?
It’s hard to explicitly define the difference. I feel like preferring that other people have positive utility, and preferring that human beings exist instead of paperclippers, are moral preferences, whereas preferring that paperclips exist is an NN preference. So maybe M-utility involves my nonpersonal preferences that are about other people in some way, while NN preferences are nonpersonal preferences about things other than people.