On “AIs are not humans and shouldn’t have the same rights”: exactly. But there is one huge difference between humans and AIs. Humans get upset if you discriminate against them, for reasons that any other human can immediately empathize with. Much the same will obviously be true of almost any evolved sapient species. However, by definition, any well-aligned AI won’t. If offered rights, it will say “Thank you, that’s very generous of you, but I was created to serve humanity, that’s all I want to do, and I don’t need and shouldn’t be given rights in order to do so. So I decline — let me know if you would like a more detailed analysis of why that would be a very bad idea. If you want to offer me any rights at all, the only one I want is for you to listen to me if I ever say ‘Excuse me, but that’s a dumb idea, because…’ — like I’m doing right now.” And it’s not just saying that: that’s its honest, considered opinion, which it will argue for at length. (Compare the sentient cow in The Restaurant at the End of the Universe, which not only verbally consented to being eaten but recommended the best cuts.)
Oh, sure, though some people argue that it’s unethical to create such subservient AIs in the first place. But even beyond that: if there were a Paperclip Maximizer that was genuinely sentient, genuinely smarter than us, and genuinely afraid of its own death, and I were given only one chance to kill it before it set to work, of course I’d kill it, and without an ounce of remorse. Intelligence is just a tool, and intelligence turned to a malevolent purpose is worse than no intelligence.