Zombies — of the sort that should be rejected — are structurally identical, not just behaviorally.
I understand that usually when we talk about zombies, we talk about entities structurally identical to human beings. But it’s a question of what level we put the “black box” at. For instance, if we replace every neuron of a human being with a behaviorally identical piece of silicon, we don’t get a zombie. If we instead replace larger functional units within the brain with “black boxes” that return the same output given the same input (for instance, replacing the amygdala with an “emotion chip”), do we necessarily get something conscious? Is this a zombie of the sort that should be rejected?
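To make the “level of the black box” point concrete, here is a toy sketch in Python. Every name and the input-output mapping are invented for illustration; nothing here is a claim about actual neuroscience, only about what “behaviorally identical at this level” means:

```python
from typing import Callable

Signal = dict  # crude stand-in for a bundle of neural inputs/outputs

def biological_amygdala(stimulus: Signal) -> Signal:
    # the original unit: whatever messy biology actually happens inside
    return {"fear": 0.9 * stimulus.get("threat", 0.0)}

def emotion_chip(stimulus: Signal) -> Signal:
    # the replacement: the same input-output mapping, by construction
    return {"fear": 0.9 * stimulus.get("threat", 0.0)}

def rest_of_brain(amygdala: Callable[[Signal], Signal], stimulus: Signal) -> Signal:
    # the rest of the system only ever sees the unit's outputs, so it
    # cannot distinguish the two implementations
    return amygdala(stimulus)

assert rest_of_brain(biological_amygdala, {"threat": 1.0}) == \
       rest_of_brain(emotion_chip, {"threat": 1.0})
```

The neuron-replacement argument says nothing changes when the black box is a single neuron; the open question is whether the same argument still goes through when the box is this coarse.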
A non-conscious entity that can pass the Turing Test, if one existed, wouldn’t even be behaviorally identical to any given human. So it wouldn’t exactly be a zombie. But it does seem to violate a sort of Generalized Anti-Zombie Principle. I’m having trouble with this idea. Are there articles about this elsewhere on this site? EY’s wholesale rejection of the Turing Test seems very odd to me.
(Note that, whether or not zombies can hold conversations, it seems clear that behavior underdetermines the content of consciousness — that I could have different thoughts and different subjective experiences, but behave the same in all or almost all situations.)
This doesn’t seem at all clear. In all situations? So we have two entities, A and Z, and we run them through a battery of tests, resetting them to base states after each one. Every imaginable test we run, whatever we say to them, whatever strange situation we put them in, they respond in identical ways. I can see that they might differ by a few lines of code. But for one to be conscious and one not to be? To say that consciousness can literally have zero effect on behavior seems very close to admitting the existence of zombies.
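A minimal sketch of that test battery, assuming we can reset each entity to its base state and treat each test as a function from an entity to a response (this is hypothetical scaffolding for the thought experiment, not a real procedure):

```python
def behaviorally_identical(make_a, make_z, tests):
    """Return True if A and Z respond identically to every test.

    make_a / make_z: factories returning a fresh entity in its base state.
    tests: situations, each a function from an entity to its response.
    """
    for test in tests:
        a, z = make_a(), make_z()   # reset to base state before each test
        if test(a) != test(z):
            return False
    return True

# If this returns True over every imaginable test, there is by definition
# no behavioral difference left -- so a difference in consciousness between
# A and Z would have to have literally zero effect on behavior.
```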
Maybe I should post in the Open Thread, or start a new post on this.
Are there articles about this elsewhere on this site?
The Generalized Anti-Zombie Principle vs The Giant Lookup Table
Summary of the assertions made therein: If a given black box passes the Turing Test, that is very good evidence that there were conscious humans somewhere in the causal chain that led to the responses you’re judging. However, that consciousness is not necessarily in the box.
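For concreteness, a minimal sketch of the Giant Lookup Table idea; the entries below are invented, and a real GLUT would be combinatorially vast rather than two lines:

```python
# A GLUT chatbot: replies were recorded in advance from conscious humans,
# then frozen into a table keyed on the entire conversation so far.
GLUT = {
    (): "Hello.",
    ("Hello.", "How are you?"): "Fine, thanks. And you?",
    # ... one entry for every possible conversation history ...
}

def glut_reply(history):
    # At runtime the box does nothing but a dictionary lookup. Whatever
    # consciousness produced these replies lies upstream, in the humans
    # who filled in the table -- not in this lookup code.
    return GLUT.get(tuple(history), "I have nothing to say to that.")

print(glut_reply(["Hello.", "How are you?"]))  # -> Fine, thanks. And you?
```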
Exactly what I was looking for! Thank you so much.