Probably the most intuitive way I can think of to see why consciousness should be defined through the Turing test (rather than some other way) is to consider the hypothetical situation of my information processing changing in a way that would influence my consciousness, but couldn’t, even in principle, influence my behavior. In that case, I could still say out loud that my consciousness changed, which would itself be a change in my behavior, contradicting the assumption.
No, you couldn’t say it out loud if the change to your information processing preserves your input-output relations.
I’m talking specifically about an information-processing change that:
1. Preserves the input-output relations,
2. Changes my consciousness, and
3. I can’t mention out loud.
Since I can mention out loud every change that happens to my consciousness, there is no information-processing change that satisfies (1), (2) and (3) simultaneously. But such an information-processing change would have to exist for any definition of consciousness other than the Turing test to be meaningful and self-consistent. Since it doesn’t, it follows that the only meaningful and self-consistent definition of consciousness is through the Turing test. (This is just one of many reasons, by the way.)
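As a toy sketch of that structure (the boolean encoding, the premise flag and the names below are placeholders of mine, not a claim about how minds work): treat (1), (2) and (3) as booleans about a hypothetical change, add the premise that every change to one’s consciousness can be mentioned out loud, and brute-force the combinations.

```python
# Toy consistency check: each claim about a hypothetical information-processing
# change is a boolean; we brute-force every combination under the premise that
# any change to one's consciousness can be mentioned out loud.
from itertools import product

EVERY_CONSCIOUS_CHANGE_IS_MENTIONABLE = True  # the premise stated above

def fits_all_three(preserves_io, changes_consciousness, unmentionable):
    """True iff a change satisfies (1), (2) and (3) without contradiction."""
    if changes_consciousness and EVERY_CONSCIOUS_CHANGE_IS_MENTIONABLE and unmentionable:
        return False  # (2) plus the premise makes the change mentionable, contradicting (3)
    return preserves_io and changes_consciousness and unmentionable

witnesses = [c for c in product([True, False], repeat=3) if fits_all_three(*c)]
print(witnesses)  # [] -- no change satisfies (1), (2) and (3) at once
```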
Since I can mention out loud every change that happens to my consciousness
Again, that’s a free will assumption. Changes that preserve function, as in (1), will prevent you from saying “I just lost my qualia” under external circumstances where you would not say that.
No, that works even under the assumption of compatibilism (and, by extension, incompatibilism). (Conversely, if I couldn’t comment out loud on my consciousness because my brain was preventing me from saying it, not even contracausal free will would help me, any more than a stroke victim could use their hypothetical contracausal free will to speak.)
I don’t understand why you would think anything I was saying was connected to free will at all.
“I just lost my qualia”
If you finish reading my comment that you originally responded to, you’ll find out that I dealt with the possibility of us losing qualia while preserving outward behavior as a separate case.
What’s the difference between your brain and you?
If you are a deterministic algorithm, there is only one thing you can ever do at any point in time because that’s what deterministic means.
If you are a function-preserving variation of a deterministic algorithm, you will deterministically do the same thing (produce the same output for a given input), because that’s what function-preserving means.
So if the unmodified you answers “yes” to “do I have qualia”, the modified version will, whether it has them or not.
There’s no ghost in the machine that’s capable of noticing the change and taking over the vocal cords.
If you’re not an algorithm, no one could make a functional duplicate.
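As a toy sketch of what “function-preserving” means here (the question strings, the answers and the names are illustrative assumptions, not a model of an actual mind): two deterministic programs that differ internally but, by construction, return the same answer to every question, including “do I have qualia?”.

```python
# Two deterministic programs with identical input-output behaviour: the
# "modified" one differs internally, yet it necessarily gives the same reports.

def original_agent(question: str) -> str:
    if question == "do I have qualia?":
        return "yes"
    return "I don't know"

def modified_agent(question: str) -> str:
    # Internally different (a lookup table, plus state that never reaches the
    # output), but input-output identical to original_agent by construction.
    hidden_internal_difference = {"qualia removed?": "unknowable from outside"}
    answers = {"do I have qualia?": "yes"}
    return answers.get(question, "I don't know")

for q in ["do I have qualia?", "did your experience just change?"]:
    assert original_agent(q) == modified_agent(q)
    print(q, "->", modified_agent(q))
```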
My point is that such a modification that preserves behavior but removes qualia is impossible in principle. So we don’t need to consider what such a version would say, since such a version can’t exist in the first place.
The gradual replacement argument is an intuition pump, not a proof.
That’s not a counterargument though. (Unless you have a proof for your own position, in which case it wouldn’t be enough for me to have an intuition pump.)
It’s a counterargument to “It’s necessarily true that...”.
It is, in fact, necessarily true. There is no other option. (A good exercise is to try to write one out in full, to see that it makes no sense.)
“Consciousness supervenes on complex information processing.”
“Consciousness supervenes on specific physics.”
To see why these don’t make sense, one needs to flesh them out in more detail (like which complex information processing, or which specific physics, exactly). If they’re kept in the form of a short phrase, it’s not immediately obvious (which is why I used the phrase “write one out in full”).
I think the burden is on you. Bear in mind I’ve been thinking about this stuff for a long time.
And if you provide such a fleshed-out idea in the future, I’ll be happy to uphold that burden.