Nice story. I think most people will eventually come to see AIs intuitively as persons; the feeling of talking to someone who is obviously a person might be strong enough.
I did consider both the possibility that most people will never understand that LaMDA-like AIs (or their chatterbots) have true consciousness and the possibility that they will, but it never occurred to me that they might take a third way: not caring about ordinary people’s sentience either.
It would be an interesting ending if we killed ourselves before the AIs could.
Thanks!
Love this idea for an ending. Had I thought of it, I might have included it in the story, all the more so since it also relates to the speculative Fermi Paradox resolution that I now mention in a separate comment.
Oh, I see. I thought their becoming silent meant they had died out by killing each other.
Indeed, that was the idea. But I had not thought of linking it to the “standard AI-risk idea” of the AI otherwise killing them anyway (which is what I think you meant).