How do you know that? What evidence or reasoning caused you to reach that conclusion? (And “necessarily”, no less.)
I would tentatively guess that most AGIs that pass the Turing test wouldn’t be conscious in the ‘moral patient’ sense of consciousness. But for an especially obvious example of this, consider an unrealistically large lookup table. (Perhaps even one tailor-made for the specific conversation at hand.)
I have many reasons to think that passing the Turing test is equivalent to being conscious.
Probably the most intuitive argument I can think of for why consciousness should be defined through the Turing test (rather than some other way) is to consider the hypothetical situation of my information processing changing in a way that would influence my consciousness but couldn’t, even in principle, influence my behavior. In that case, I could still say out loud that my consciousness changed, which contradicts the assumption that the change in information processing can have no influence on my behavior (and, further, contradicts the assumption that there is any change in information processing that could influence the consciousness but not the behavior).
But that only tells me the qualia can’t be any different if the behavior stays constant. We still have to consider how we know that a change in the internal processing can’t switch the qualia to null (in which case there is nobody inside who could report the difference out loud, because there is nobody inside at all).
In that case, I believe we could use an analogue of the gradual-replacement argument to show that this would result either in fading or in suddenly disappearing qualia, making it highly implausible.

Etc.
an unrealistically large lookup table.

A lookup table doesn’t pass the Turing test, because its response can’t depend on what was said earlier in the conversation. We could add a counter to it and hardcode all possible responses as a function of the entire conversation history up to some length n (after which the system has to shut down). Such a system can only pass the Turing test if the length of the conversation is limited, but then it would have consciousness (it also wouldn’t fit into our universe, but we can imagine making the universe larger).
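To make the construction concrete, here is a minimal sketch of the lookup-table-plus-counter idea, assuming a toy table and example exchange of my own (none of the names or dialogue come from the discussion): the table is keyed on the entire conversation prefix, which is exactly what lets a reply depend on earlier turns, and the counter enforces the length limit n.

```python
# Toy "lookup table + counter" chatbot (illustrative sketch; the table,
# names, and example exchange are assumptions of mine, not from the thread).
# Key point: the table is indexed by the ENTIRE conversation so far,
# which is what lets a reply depend on earlier turns.

# Maps a full transcript prefix (alternating judge/bot lines) to the bot's
# next reply. For a real Turing test this table would be astronomically
# large; here it only covers one tiny exchange.
TABLE = {
    ("Hello!",): "Hi there, nice to meet you.",
    ("Hello!", "Hi there, nice to meet you.", "What did I just say?"):
        "You said 'Hello!'",  # this reply depends on an earlier turn
}

MAX_TURNS = 2  # the counter: after n judge inputs, the system must shut down


def reply(history: tuple) -> str:
    """Return the hardcoded reply for this exact conversation prefix."""
    judge_turns = (len(history) + 1) // 2
    if judge_turns > MAX_TURNS:
        raise RuntimeError("conversation exceeded n turns: system shuts down")
    return TABLE.get(history, "<the table was not built for this input>")


if __name__ == "__main__":
    history = ("Hello!",)
    history += (reply(history), "What did I just say?")
    print(reply(history))  # -> You said 'Hello!'
```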
It might not sound intuitive that an input-output transformation in a Turing-test-passing lookup table plus counter has consciousness, but (without knowing that it’s the information processing that creates consciousness) it also doesn’t seem intuitive that electricity running between neurons according to certain rules has consciousness, and in this case the various philosophical considerations supersede the counterintuitiveness (possibly; I can’t actually speak for people who find that counterintuitive, because I don’t).
(Perhaps even one tailor-made for the specific conversation at hand.)

That’s not possible, because we can’t know in advance what the adversary will say, and if the adversary follows a script, it’s not the Turing test anymore.
I guess we could simulate the adversary, but then we would need to generate a person’s responses in our head to find out what to answer the simulated adversary (so that we can write the answers down and hardcode them), and that act is what generates the corresponding qualia, so this can’t be escaped.
In any case, learning in advance what the adversary will say in the conversation breaks the spirit of the test, so I believe this should be removable by phrasing the rules more carefully.
In that case, I could still say out loud my consciousness changed,

No, you couldn’t say it out loud if the change to your information processing preserves your input-output relations.
I’m talking specifically about an information-processing change that

1. preserves the input-output relations,
2. changes my consciousness, and
3. I can’t mention out loud.
Since I can mention out loud every change that happens to my consciousness, there is no information-processing change that fits (1), (2), and (3) simultaneously. But such an information-processing change must exist for any definition of consciousness other than one through the Turing test to be meaningful and self-consistent. Since it doesn’t, it follows that the only meaningful and self-consistent definition of consciousness is through the Turing test.
(This is just one of many reasons, by the way.)
Since I can mention out loud every change that happens to my consciousness

Again, that’s a free-will assumption. Changes that preserve function, as in (1), will prevent you from saying “I just lost my qualia” under external circumstances where you would not say that.
No, that works even under the assumption of compatibilism (and, by extension, incompatibilism). (Conversely, if I couldn’t comment out loud on my consciousness because my brain was preventing me from saying it, not even contracausal free will would help me (any more than a stroke victim could use their hypothetical contracausal free will to speak).)
I don’t understand why you would think anything I was saying was connected to free will at all.
“I just lost my qualia”

If you finish reading my comment that you originally responded to, you’ll find that I dealt with the possibility of us losing qualia while preserving outward behavior as a separate case.
What’s the difference between your brain and you?
If you are a deterministic algorithm, there is only one thing you can ever do at any point in time because that’s what deterministic means.
If you are a function-preserving variation of a deterministic algorithm, you will deterministically do the same thing... produce the same output for a given input... because that’s what function-preserving means.
So if the unmodified you answers “yes” to “do I have qualia”, the modified version will, whether it has them or not.
There’s no ghost in the machine that’s capable of noticing the change and taking over the vocal cords.
If you’re not an algorithm, no one could make a functional duplicate.
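As a minimal sketch of what a “function-preserving variation” means here, assuming two toy responders of my own invention (nothing below comes from the thread itself): the implementations differ internally but agree on every input, so no question asked from the outside can tell them apart.

```python
# Illustrative sketch (my own, not from the thread): a "function-preserving
# variation" of a deterministic algorithm is a different internal
# implementation with exactly the same input-output relation.

def original(question: str) -> str:
    """The 'unmodified' deterministic responder."""
    if question == "do I have qualia":
        return "yes"
    return "I don't know"


def modified(question: str) -> str:
    """Internally different, but produces the same output for every input."""
    answers = {"do I have qualia": "yes"}
    return answers.get(question, "I don't know")


# Because the two agree on every input, no question asked from the outside
# (including "do I have qualia") can distinguish them.
for q in ("do I have qualia", "what did I just say?"):
    assert original(q) == modified(q)
```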
So if the unmodified you answers “yes” to “do I have qualia”, the modified version will, whether it has them or not.

My point is that such a modification, one that preserves behavior but removes qualia, is impossible in principle. So we don’t need to consider what such a version would say, since such a version can’t exist in the first place.
The gradual replacement argument is an intuition pump, not a proof.
That’s not a counterargument, though. (Unless you have a proof for your own position, in which case it wouldn’t be enough for me to have an intuition pump.)
It’s a counterargument to “It’s necessarily true that...”.
It is, in fact, necessarily true. There is no other option. (A good exercise is to try to write one out in full, to see that it makes no sense.)
“Consciousness supervenes on complex information processing.”

“Consciousness supervenes on specific physics.”
To see why these don’t make sense, one needs to flesh them out in more detail (which complex information processing or which specific physics, exactly, and so on). If they’re kept in the form of a short phrase, it’s not immediately obvious (that’s why I used the phrase “write one out in full”).
I think the burden is on you. Bear in mind I’ve been thinking about this stuff for a long time.
And if you provide such a fleshed-out idea in the future, I’ll be happy to uphold that burden.