Interestingly, we just noticed that Turing Tests are natural mirror chambers!
A Turing test is a situation where a human goes behind a curtain and answers questions in earnest, while an AI goes behind another curtain and has to convince the audience that it is the human participant.
One simple way for an AI to pass the Turing test is to construct and run a model of a human who is having the experience of being a human participant in a Turing test, then say whatever the model says. [1]
In that kind of situation, as soon as you step behind that curtain, you can no longer know what you are, because you know that there is another system, one that isn't really a human, having this exact experience that you are having.
You feel human, and you remember living a human life, but so does the model of a human being simulated within the AI.
[1] But this might not be what the AI does in practice. It might be easier and more reliable to behave in a way that is more superficially humanlike than a real human would or could behave, using little tricks that can only be performed with self-awareness. In that case, the model would know it isn't really human, so the first-hand experience of being a real human participating in a Turing test would be distinguishable from it. If you were sure of this self-aware-actor theory, the test would no longer be a mirror chamber; but if you weren't completely sure, it still would be.