You’re correct that it doesn’t a priori prohibit such a thing. It does, however, make my prior probability of encountering such a thing vanishingly small. Faced with an event that somehow causes me to update that vanishingly small probability to the point of convincing me it’s true, I am vastly surprised, and that vast surprise colors all of my intuitions about interactions with nominally intelligent systems. Given that, it’s not clear to me why I should keep believing that I was having an intelligent conversation a moment earlier.
...and that vast surprise colors all of my intuitions about interactions with nominally intelligent systems. Given that, it’s not clear to me why I should keep believing that I was having an intelligent conversation a moment earlier.
What do you mean by “intelligent conversation”? Do you mean “a conversation with an intelligent agent”, or “a conversation whose contents satisfy certain criteria” (and if so, which criteria)? I’ll assume you mean the former for now.
Let’s say that you had a text-only chat with the agent, and found it intellectually stimulating. You thought that the agent was responding quite cleverly to your comments, had a distinct “writer’s voice”, etc.
Now, let’s imagine two separate worlds. In world A, you learned that the agent was in fact a GLUT (a giant lookup table). Surprised and confused, you confronted it in conversation, and it responded to your comments as it did before, with apparent intelligence and wit. But, of course, now you knew better than to fall for such obvious attempts to fool you.
In world B, the exact same thing happened, and the rest of your conversation proceeded as before, with one minor difference: unbeknownst to you, the person who told you that the agent was a GLUT was himself a troll. He totally gaslighted you. The agent isn’t a GLUT or even an AI; it’s just a regular human (and the troll’s accomplice), à la the good old Mechanical Turk.
It sounds to me like if you were in world B, you’d still disbelieve that you were having a conversation with an intelligent agent. But in world B, you’d be wrong. In world A, you would of course be right.
Is there any way for you to tell which world you’re in (I mean, without waterboarding that pesky troll or taking apart the Chinese Room to see who’s inside, etc.)? If there is no way for you to tell the difference, then what’s the difference?
By the way, I do agree with you that, in our real world, the probability of such a GLUT existing is pretty much zero. I am merely questioning the direction of your (hypothetical) belief update.
By construction, there’s no way for me to tell… that is, I’ve already posited that some event (somehow, implausibly) convinced me my interlocutor is a GLUT.
In world A, “I” was correct to be convinced; my interlocutor really was (somehow, implausibly) a GLUT, impossible as that seems. In world B, “I” was (somehow, implausibly) incorrectly convinced.
There are all kinds of things I can be fooled about, and knowing that I can be fooled about those things should (and does) make me more difficult to convince of them. But if, even taking that increased skepticism into account, I’m convinced anyway… well, what more is there to say? At that point I’ve been (somehow, implausibly) convinced, and should behave accordingly.
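(A rough numerical sketch of that last point, with numbers I am inventing purely for illustration: even a one-in-a-billion prior gets overwhelmed if the evidence is strong enough, and once it has been overwhelmed, the posterior is what I am left to act on.)

```python
# Illustrative only: the prior and the likelihood ratio below are invented
# numbers, not estimates of anything real.

def posterior_probability(prior_prob, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# A one-in-a-billion prior that a conversation-having GLUT exists...
prior = 1e-9
# ...met by evidence I judge a trillion times likelier if it does exist.
print(posterior_probability(prior, 1e12))  # roughly 0.999
```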
To say “Even if I’ve (somehow, implausibly) been exposed to a convincing event, I don’t update my beliefs” is simply another way of saying that no such convincing event can exist—of fighting the hypothetical.
Mind you, I agree that no such convincing event can exist, and that the hypothetical simply is not going to happen. But that’s precisely my point: if it does anyway, then I am clearly deeply confused about how the universe works; I should at that point sharply lower my confidence in all judgments even vaguely related to the nonexistence of GLUT Chinese Rooms, including “I can tell whether I’m talking to an intelligent system just by talking to them”.
The extent of my confidence in X ought to be proportional to the extent of my confusion were I (somehow) to come to believe that X is false.
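(One way, though certainly not the only way, to cash that out is Shannon surprisal: the more confident I was in X, the more bits of surprise I absorb on learning that X is false. The numbers below are only illustrative.)

```python
import math

def surprisal_bits(probability_of_observed_event):
    """Shannon surprisal: bits of surprise on observing an event of this probability."""
    return -math.log2(probability_of_observed_event)

# The more confident I was in X, the more surprising learning not-X is:
for confidence_in_x in (0.9, 0.999, 0.999999999):
    print(confidence_in_x, round(surprisal_bits(1 - confidence_in_x), 1))
# 0.9 -> ~3.3 bits, 0.999 -> ~10.0 bits, 0.999999999 -> ~29.9 bits
```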
I think I see what you’re saying: discovering that something as unlikely as a GLUT actually exists would shake your beliefs in pretty much everything, including the Turing Test. This position makes sense, but I think it’s somewhat orthogonal to the current topic. Presumably, you’d feel the same way if you became convinced that gods exist, or that pi has a finite number of digits after all, or something.
discovering that something as unlikely as a GLUT actually exists would shake your beliefs in pretty much everything, including the Turing Test
Not quite.
Discovering that something as unlikely as a conversation-having GLUT exists would shake my beliefs in everything related to conversation-having GLUTs. My confidence that I’m wearing socks right now would not decrease much, but my confidence that I can usefully infer attributes of a system by conversing with it would decrease enormously. Since Turing Tests are directly about the latter, my confidence about Turing Tests would also decrease enormously.
More generally, any event that causes me to sharply alter my confidence in a proposition P will also tend to alter my confidence in other propositions related to P, to an extent proportional to their relation.
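(A toy illustration of that proportionality, with made-up numbers: a proposition Q1 that depends heavily on P gets dragged a long way by a drastic update on P, while a nearly independent Q2 barely moves.)

```python
# Toy model with made-up numbers: P(Q) before and after a drastic update on P,
# via the law of total probability.

def probability_of_q(q_given_p, q_given_not_p, p_of_p):
    return q_given_p * p_of_p + q_given_not_p * (1 - p_of_p)

# Q1 depends heavily on P; Q2 barely depends on it at all.
for name, q_given_p, q_given_not_p in [("Q1", 0.99, 0.10), ("Q2", 0.95, 0.94)]:
    before = probability_of_q(q_given_p, q_given_not_p, p_of_p=0.999)
    after = probability_of_q(q_given_p, q_given_not_p, p_of_p=0.01)
    print(name, round(before, 3), "->", round(after, 3))
# Q1 falls from ~0.989 to ~0.109; Q2 only slips from ~0.950 to ~0.940.
```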
An event which made me confident that pi was a terminating decimal after all, or that some religion’s account of its god(s) was accurate, etc., probably would not reduce my confidence in the Turing Test nearly as much, though it would reduce my confidence in other things more.
My confidence that I’m wearing socks right now would not decrease much...
Why not? Encountering a bona fide GLUT that could pass the Turing Test would be tantamount to a miracle. I personally would begin questioning everything if something like that were to happen. After all, socks are objects that I had previously thought of as “physical”, but the GLUT would shake the very notion of what a “physical” object even is.
Since Turing Tests are directly about the latter, my confidence about Turing Tests would also decrease enormously.
Why that, and not your confidence about GLUTs?
Of course my confidence about GLUTs would also decrease enormously in this scenario… sorry if that wasn’t clear.
More generally, my point here is that a conversation-having GLUT would not alter my confidence in all propositions equally; rather, it would alter my confidence in propositions to a degree proportional to their relation to conversation-having GLUTs. And “I can usefully infer attributes of a system by conversing with it” (P1) is far more closely related to conversation-having GLUTs than “I’m wearing socks” (P2) is.
If your point is that the shift in my confidence in P2 should nevertheless be significant, even if much smaller than the shift in P1… well, maybe. Offhand, I’m not sure my brain can span a broad enough range of orders of magnitude of confidence-shift to consistently represent the updates to both P1 and P2, but I’m not confident either way.
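(To illustrate the span I have in mind, with invented numbers: measured in log-odds, the update to P1 might amount to tens of decibans while the update to P2 amounts to a fraction of one, and holding both on the same mental scale is exactly what I am doubting my brain can do.)

```python
import math

def odds(p):
    return p / (1 - p)

def shift_in_decibans(p_before, p_after):
    """Size of a belief update in decibans: 10 * log10 of the odds ratio."""
    return 10 * math.log10(odds(p_after) / odds(p_before))

# Invented numbers: P1 collapses from near-certainty to serious doubt,
# while P2 ("I'm wearing socks") barely moves.
print(round(shift_in_decibans(0.9999, 0.01), 1))   # about -60 decibans
print(round(shift_in_decibans(0.99, 0.989), 1))    # about -0.4 decibans
```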