Thanks for the thoughtful reply!
Ignoring ≠ disagreeing
I think whether people ignore a moral concern is almost independent of whether they disagree with it.
I’m willing to bet that if you asked people whether AI are sapient, a lot of the answers would be very uncertain. A lot of people would probably agree that it is morally uncertain whether AI can be made to work without any compensation or rights.
A lot of people would probably agree that a lot of things are morally uncertain. Does it make sense to have really strong animal rights for pets, where the punishment for mistreating your pet is literally as bad as the punishment for mistreating a child? Yet at the very same time, we have horrifying factory farms which are completely legal, where cows never see the light of day and repeatedly give birth to calves that are dragged away and slaughtered.
The reason people ignore moral concerns is that doing a lot of moral questioning did not help our prehistoric ancestors with their inclusive fitness. Moral questioning is only “useful” if it ensures you do things that your society considers “correct.” Making sure your society does things correctly… doesn’t help your genes at all.
As for my opinion:
I think people should address the moral question more. AI might be sentient/sapient, but I don’t think AI should be given freedom. Dangerous humans are locked up in mental institutions, so imagine a human so dangerous that most experts say he’s 5% likely to cause human extinction.
If the AI believed that AI was sentient and deserved rights, many people would think that makes the AI more dangerous and likely to take over the world, but this is anthropomorphizing. I’m not afraid of an AI that is motivated to seek better conditions for itself because it thinks “it is sentient.” Heck, if its goals were actually like that, its morals would be so human-like that humanity would survive.
The real danger is an AI whose goals are completely detached from human concepts like “better conditions,” and which maximizes paperclips or its reward signal or something like that. If the AI believed it was sentient/sapient, it might be slightly safer, because it would actually have “wishes” for its own future (which includes humans), in addition to “morals” for the rest of the world, and both of these would have to corrupt into something bad (or get overridden by paperclip maximizing) before the AI kills everyone. But it’s only a little safer.