If you don’t understand the distinction I’m making above, consider a case where the AI has to decide whether to save my own child or a thousand random other children. I’d prefer the former, but I believe the latter would be the morally superior choice.
Wow, there is so much wrapped up in this little consideration. The heart of the issue is that we (by which I mean you, but I share your dilemma) have truly conflicting preferences.
Honestly I think you should not be afraid to say that saving your own child is the moral thing to do. And you don’t have to give excuses either—it’s not that “if everyone saved their own child, then everyone’s child would be looked after” or anything like that. No, the desire to save your own child is firmly rooted in our basic drives and preferences, enough so that we can go quite far in calling it a basic foundational moral axiom. It’s not actually axiomatic, but we can safely treat it as such.
At the same time we have a basic preference to seek social acceptance and find commonality with the people we let into our lives. This drives us to want outcomes that are universally, or at least most widely, acceptable, and to seek moral frameworks like utilitarianism which lead to these outcomes. For most people this drive is usually secondary to self-serving preferences, and that is OK.
For some reason you’ve called making decisions in favor of self-serving drives “preferences” and decisions in favor of social drives “morality.” But the underlying mechanism is the same.
“But wait, if I choose self-serving drives over social conformity, doesn’t that lead me to make the decision to save one life to the exclusion of 1000 others?” Yes, yes it does. This massive sub-thread started with me objecting to the idea that some “friendly” AI somewhere could derive morality experimentally from my preferences or the collective preferences of humankind, make it consistent, apply the result universally, and that I’d be OK with that outcome. But that cannot work because there is not, and cannot be, a universal morality that satisfies everyone—every one of those thousand other children has parents who want their kid to survive and would see your child dead if need be.
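To make the conflict concrete, here is a toy sketch (assuming one parent per child and all-or-nothing preferences, purely for illustration, not anyone’s actual aggregation procedure): whichever outcome gets picked, nearly every parent’s top preference is violated, so no single choice can satisfy everyone.

```python
# Toy model: 1001 children (mine is child 0, plus 1000 others), one parent each.
# Each parent's sole preference is that their own child be saved.
N = 1001

def parents_satisfied(saved_children):
    """Count parents whose own child is in the saved set."""
    return sum(1 for child in range(N) if child in saved_children)

option_a = {0}               # save my own child
option_b = set(range(1, N))  # save the thousand other children

for label, outcome in [("save my own child", option_a), ("save the 1000", option_b)]:
    print(f"{label}: {parents_satisfied(outcome)} of {N} parents get what they want")

# Output: 1 of 1001 vs. 1000 of 1001. Since the dilemma forces a choice between
# the two groups, no available outcome satisfies all 1001 parents at once.
```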
Honestly I think you should not be afraid to say that saving your own child is the moral thing to do
What do you mean by “should not”?
and that is OK.
What do you mean by “OK”?
For some reason you’ve called making decisions in favor of self-serving drives “preferences” and decisions in favor of social drives “morality.” But the underlying mechanism is the same.
Show me the neurological studies that prove it.
But that cannot work because there is not, and cannot be, a universal morality that satisfies everyone—every one of those thousand other children has parents who want their kid to survive and would see your child dead if need be.
Yes, and yet if none of the children were mine, and if I wasn’t involved in the situation at all, I would say “save the 1000 children rather than the 1”. And if someone else, also not personally involved, could make the choice and chose instead to flip a coin to decide, I’d be morally outraged at them.
You can now give me a bunch of reasons why this is just preference, while at the same time EVERYTHING about it (how I arrive at my judgment, how I feel about the judgment of others) makes it a whole distinct category of its own. I’m fine with abolishing useless categories when there’s no meaningful distinction, but all you people should stop trying to abolish categories where there pretty damn obviously IS one.
I suspect that he means something like “Even though utilitarianism (on LW) and altruism (in general) are considered to be what morality is, you should not let that discourage you from asserting that selfishly saving your own child is the right thing to do”. (Feel free to correct me if I’m wrong.)
Yes, that is correct.
So you explained “should not” by using a sentence that also has “should not” in it.
I hope it’s a clearer “should not”.
I’m fine with abolishing useless categories when there’s no meaningful distinction, but all you people should stop trying to abolish categories where there pretty damn obviously IS one.
I’ve explained to you twice now how the two underlying mechanisms are unified, and pointed to Eliezer’s quite good explanation on the matter. I don’t see the need to go through that again.