To rephrase: I accept that utilitarianism is the correct way to extrapolate our moral intuitions into a coherent generalizable framework. I feel no ‘should’ about it—no need to apply that framework to myself—and feel no cognitive dissonance when I recognize that an action I wish to perform is immoral, if it hurts only people I don’t care about.
Ultimately I think that is the way all utilitarianism works. You define an in-group of people who matter: each member is effectively as important as every other, and possibly as important as yourself.
For most modern utilitarians, the in-group is all humans. Some put mammals with relatively complex nervous systems in the group, and for the most part become vegetarians. Others put everything with a nervous system in there, and for the most part become vegans. Very darn few put all life forms in there, since they would starve. Implicit in this is the assumption that all life forms would place negative utility on being killed and eaten, which may be reasonable, or may be a projection of human values onto non-human entities.
But logically it makes as much sense to shrink the group you are utilitarian about as to expand it. ‘Only Americans’ seems to be a popular one in the US when discussing immigration policy. ‘Only my friends and family’ has a following. ‘Only LA Raiders fans’ or ‘only Manchester United fans’ also gathers its proponents.
Around here, I think you find people trying to put all thinking things, even mechanical ones, in the in-group, or perhaps only all conscious thinking things. Maybe the way to create a friendly AI would be to make sure the AI never values its own life more than its own death; then we would always be able to turn it off without it fighting back.
Also, I suspect that in reality you have a sliding scale of acceptance: you would not be morally neutral about killing a stranger on the road and taking their money if you thought you could get away with it. But you certainly wouldn't accord the stranger the full benefit of your concern, just a partial benefit.
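To put the sliding-scale idea a bit more concretely, you could model concern as a per-person weight rather than a hard in/out boundary. A toy Python sketch, where all the names and numbers are made up for illustration (not anything anyone in this thread actually endorsed):

    # Toy model: utilitarianism with a "concern weight" per person,
    # rather than a binary in-group/out-group membership.
    concern = {
        "self": 1.0,
        "family": 0.9,
        "friend": 0.7,
        "stranger": 0.1,   # small but not zero: killing them still registers as bad
        "disliked": 0.0,
    }

    def weighted_utility(effects):
        """Sum each person's welfare change, scaled by how much I care about them."""
        return sum(concern[who] * delta for who, delta in effects)

    # Robbing a stranger: a large welfare loss for them, a small gain for me.
    rob_stranger = [("stranger", -100.0), ("self", +5.0)]
    print(weighted_utility(rob_stranger))  # -5.0: still net negative, despite the low weight

The point is just that a stranger's weight can be small without being zero, which reproduces the "partial benefit of your concern" intuition.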
Oh, there are definitely gradations. I probably wouldn't do this, even if I could get away with it. I don't care enough about strangers to go out of my way to save them, but neither do I want to kill them. On the other hand, if it were a person I had an active dislike for, I probably would. All of which is basically irrelevant, since it presupposes the incredibly unlikely "if I thought I could get away with it".
I used to think I thought that way, but then I had some opportunities to casually steal from people I didn't know (and easily get away with it), and I didn't take them. That said, I pirate things all the time, despite believing that doing so frequently harms the content owners a little.
I have taken that precise action against someone who mildly annoyed me. I remember it (and the perceived slight that motivated it), but feel no guilt over it.