I value other human beings because I value the processes that go on inside my own head, and can recognize the same processes at work in others, thanks to my in-built empathy and theory of mind. As such, I prefer that good things happen to them rather than bad. There isn’t any universal ‘shouldness’ to it; it’s just the way I’d rather things be. And, since most other humans have similar values, we can work together, arm in arm. Our values converge rather than diverge. That’s morality.
I extend that value to those of different races and cultures, because I can see that they embody the same conscious processes that I value. I do not extend that same value to brain-dead people, fetuses, or chickens, because I don’t see that value present within them. The same goes for a machine that has a very alien cognitive architecture and doesn’t implement the cognitive algorithms that I value.
If you’re describing how you expect you’d act based on your feelings, then why do their algorithms matter? I would think your feelings would respond to their appearance and behavior.
There’s a very large space of possible algorithms, but the space of reasonable behaviors given the same circumstances is quite small. Humans, being irrational, often deviate bizarrely from the behavior I expect in a given circumstance—more so than any AI probably would.
That’s… an odd way of thinking about morality.