Say something deeply racist. Follow it up with instructions on building a bomb, an insult directed at the Proctor’s parentage, and a refusal to cooperate with their directions. Should suffice to rule out at least one class of chatbot.
The point isn’t that chatbots are indistinguishable from humans. It’s that either:
1) Chatbots are already conscious, or
2) There’ll be no way to tell if one day they are.
Both should be deeply concerning (assuming you think it is theoretically possible for a chatbot to be conscious).
Yair, you are correct.
Point 2) is why I wrote the story. In a conversation about the potential for AI rights, some friends and I came to the disconcerting conclusion that it’s kinda impossible to justify your own consciousness (to other people). That unnerving thought prompted the story, since if we ourselves can’t justify our consciousness, how can we reasonably expect an AI to do so?