If there were a repository of philosophical work along those lines—not concerned with defending basic ideas like anti-zombieism, but with accepting those basic ideas and moving on to tackle the harder questions of naturalism and cognitive reductionism—then that I might well be interested in reading. But I don’t know who, besides a few heroes, would be able to compile such a repository—who else would see a modal logic as an obvious bounce-off-the-mystery.
One of the facts of modern philosophy is that zombieism has not been resolved in a satisfactory manner. You can’t simply claim that one idea is the most accurate one and run with it, because then you’re using an intermediate argument of dubious provenance. You could avoid the question altogether, particularly in AI design: If two programs have identical outputs across the entire range of meaningful input, then it cannot be the case that one is self-aware and the other is not.
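The “identical outputs” claim is easy to make concrete. Here is a minimal sketch (the functions are hypothetical examples of my own, not anything from the discussion): two programs with entirely different internals that no input/output test can distinguish. The claim being made is that self-awareness, whatever it is, cannot be a property that separates them either.

```python
# Two hypothetical programs: different internals, identical behavior.

def summed_iteratively(n: int) -> int:
    """Sum the integers 0..n-1 by explicit iteration."""
    total = 0
    for i in range(n):
        total += i
    return total

def summed_in_closed_form(n: int) -> int:
    """Sum the integers 0..n-1 via the arithmetic-series formula."""
    return n * (n - 1) // 2

# Extensionally identical across the whole (meaningful) input range,
# so no behavioral observation can tell one from the other.
assert all(summed_iteratively(n) == summed_in_closed_form(n)
           for n in range(1000))
```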
It certainly has been resolved. At least to the degree that anyone with a lick of sense can look at the pro-zombie arguments and say ‘that is blatantly unphysical nonsense.’ We can talk about consciousness. The atoms that make up my fingers can interact with the atoms that make up my nervous system, which can in turn interact with the atoms that make up my brain, and so on, and this unbroken causal chain makes me talk about feeling conscious inside, which I do. We may not know how it works computationally, but we do know that it’s something in the brain that’s doing it. Or, at least, something is making us talk about consciousness. It is possible, I suppose, that the thing that makes us conscious is different from the thing that makes us talk about consciousness—but there’s certainly no evidence for it, and it’s a damned silly idea in any case. So, as far as the naive ‘zombies physically identical to humans’ idea goes, if you don’t consider it shot down, decapitated, plucked, gutted, and served for Christmas dinner, then that tells you more about the flaws in your criteria for drawing a conclusion than anything else.
There are some related questions worth exploring—like, for instance, once we figure out how consciousness works on a mechanical level, we can answer the question of whether it’s possible to build a piece of software that convincingly impersonates a human being without having subjective experience. That’s an interesting question. But the classical philosophical zombies are, frankly, stupid.
It is possible, I suppose, that the thing that makes us conscious is different from the thing that makes us talk about consciousness—but there’s certainly no evidence for it, and it’s a damned silly idea in any case.
True, but it seems to me almost trivially so: explaining why we talk about consciousness makes a theory positing that we “are conscious” otiose. What other evidence is there? What other evidence could there be? The profession of belief in mysterious “raw experience” merely expresses a cognitive bias, the acceptance of which should be a deep embarrassment to exponents who call themselves rationalists.
The term “self-awareness,” however, is quite misleading. I can have awareness of my inner states—some knowledge about what I’m thinking—without having mysterious raw experience. “Self-awareness” here is used by raw-experience believers to mean something special: knowledge of “what it is like to be me” (Thomas Nagel’s phrase). The ambiguous usage of “self-awareness” obfuscates the problem, making belief in “raw experience” seem reasonable when it’s really a believed (and beloved) superstition.
Now, given an explanation for how subjective experience occurs, determine if a given physical entity has subjective experience. What would be different in your observations if I did not have subjective experiences?
Give me that explanation, and I’ll tell you. It’s clearly some kind of computational / information process, but it’s not clear exactly what’s going on there. It has to serve a survival purpose, or else we wouldn’t have it. We’ll probably be able to conduct experiments and find out down the line, but it’s tough right now. I also suspect that subjective experience isn’t a sharp cutoff. It’s probably a gradient of depth of insight that extends down to organisms with simpler nervous systems and extends, at least in principle, past humans. But that’s speculation on my part.
It isn’t an information process; it’s a chemical process, because information can’t trigger a neuron.
I see no reason why subjective experience needs to have had a survival purpose in the past; isn’t it also possible that self-awareness was a contra-survival byproduct of some other function, one which was pro-survival in the distant past? I don’t think that sentience is the appendix of the mind, but “because we have it” isn’t in the list of evidence against that hypothesis.
Suppose that we figured out the encoding of the sensory and motor nerves, such that we could interpret and duplicate their signals. Then we put a human brain in a box, wired it to false nerves, and provided it with an internally consistent set of sensory inputs that reacted to the motor outputs. I see no reason why that brain would have less subjective experience in that state than normal. (If you do, then disagree with me on this point, and it becomes open to verification.)
Take the other example: a computer which can pass the Turing test is wired into a human body, taking the sensory nerve signals as its inputs and producing the motor nerve signals as its outputs, such that other humans cannot tell, without inspecting inside the skull, that it is an artificial computer. Depending on your position on zombieism, this entity may or may not have subjective experience.
Now, take the zombie computer and hook it up to the false nervous inputs (a code sketch of the setup follows below):
If it didn’t have subjective experience in a real body, then it doesn’t have it now; in that case, why does a human brain have subjective experience, given that it takes the same inputs and provides the same outputs?
If it did have subjective experience in a real body, but doesn’t have it now, why the change, since nothing within the entity being tested is different?
If it still has subjective experience, then at least one computer simulation of a human interacting with a computer simulation of a world has subjective experience. Why would this not be the case for all such simulations?
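For what it’s worth, the structure of this swap argument can be sketched in code. Everything below is a toy illustration under my own assumptions (all names are hypothetical, and the controller is a deliberately trivial placeholder, not a model of a brain or of a Turing-test passer): the false nerves form a closed loop around whatever implements the nerve interface, so swapping the brain in a box for the zombie computer changes nothing the loop can observe.

```python
from typing import Protocol

class NerveInterface(Protocol):
    """Whatever sits behind the nerves: it consumes encoded sensory
    signals and emits encoded motor signals. The brain in a box and
    the zombie computer both present exactly this interface."""
    def step(self, sensory: bytes) -> bytes: ...

class ToyController:
    """Placeholder standing in for either the brain or the computer;
    its internals are irrelevant to the structure of the argument."""
    def step(self, sensory: bytes) -> bytes:
        return sensory[::-1]  # arbitrary deterministic behavior

def run_false_nerves(controller: NerveInterface, steps: int) -> bytes:
    """Closed loop of false nerves: each tick's sensory input is an
    internally consistent function of the previous motor output, so
    the controller has no access to anything outside the simulation."""
    sensory = b"seed percept"
    for _ in range(steps):
        motor = controller.step(sensory)
        sensory = b"world reaction to " + motor  # hypothetical world rule
    return sensory

# The loop is indifferent to what is wired in; any two controllers
# with the same input/output behavior are interchangeable here.
print(run_false_nerves(ToyController(), steps=3))
```

The design point is just substitutability: if the interface really captures all the inputs and outputs, the three cases above differ only in what we say about the box, not in anything measurable from inside the loop.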