But a common theme seems to be that blueness is a “feel” somehow “associated” with the entity, or even associated with being the entity. To see blue is how it feels to have your neurons firing that way.
This is the dualism which doesn’t know it’s dualism.
As a reductionist who disagrees with your overall critique of reductionism, I have to say that you hit the nail on the head here. Some self-styled reductionists do seem prone to “explaining” subjective experience by saying that it’s nothing more than what certain algorithms feel like from the inside. As you say, that’s really a dualist account if you leave it there.
My problem is I don’t see how you can avoid a “that’s how an algorithm feels from the inside” explanation somewhere down the line. Even if you create some theory that purports to account for the (say) mysterious redness of red, isn’t there still a gap to bridge between that account and whatever your subjective perception—your feeling—of red is? I’m confused as to what an ‘explanation’ for the mysterious redness of red would even look like.
If you can’t even imagine what an answer would look like, you should doubt that you’ve successfully asked a question.
That’s not supposed to be a conversation-stopper. It’s just that the first step in the conversation should be to make the question clear.
This is a useful heuristic, but if anything it seems to dissolve the initial question of “Where’s the qualia?” As DanArmak and RobinZ, channeling Dennett, point out elsewhere in the thread, questions about qualia don’t appear to be answerable.
What I think Mitchell is looking for (and he can correct me if I’m wrong) as an explanation of experience is some model that describes the elements necessary for experience and how they interact in some quantitative way. For example, let’s pretend that flesh brains are not the only modules capable of experience, and that we can build experiences out of other materials. A theory of experience would help to answer: what materials can be used, what processing speeds are acceptable (i.e., can experience exist in stasis), what CPUs/processors/algorithms must be implemented, and what outputs will convince us that experience is taking place (versus merely creating a Chinese Room).

Now, I don’t think we will have any way of answering these questions before uploading/AI, but I can conceive of ways of testing many variables in experience once a mind has been uploaded. We could change one variable, ask the subject to describe the change, change it back, ask the subject what his memory of the experience is, and so on. We can run simulations that are deliberately missing normal algorithms until we find which pieces of a mind are the bare-bones essentials of experience.

To me this is just another question for the neuroscientists and information theorists, once our technology is advanced enough to actually experiment on it. It is only a ‘problem’ if you believe p-zombies are possible, and that we might create entities that describe experience without having it.
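Purely as illustration, here is a minimal sketch of that one-variable-at-a-time loop. Everything in it is hypothetical: SimulatedMind stands in for whatever interface an uploaded mind would actually expose, and the module names are placeholders rather than claims about real cognitive architecture.

# Hypothetical ablation protocol: every name here is a placeholder, not a real API.

HYPOTHETICAL_MODULES = ["visual_binding", "working_memory", "self_model"]

class SimulatedMind:
    """Stub standing in for an uploaded mind that we can probe and perturb."""

    def __init__(self):
        self.disabled = set()

    def disable_module(self, name):
        self.disabled.add(name)

    def restore_module(self, name):
        self.disabled.discard(name)

    def ask(self, question):
        # A real subject would answer in its own words; the stub just reports
        # which modules are currently switched off.
        return f"(answer to {question!r} with {sorted(self.disabled)} disabled)"

def ablation_experiment(mind, modules=HYPOTHETICAL_MODULES):
    """Disable one module at a time, probe the live report, restore the module,
    then probe the subject's memory of the altered interval."""
    results = {}
    for module in modules:
        mind.disable_module(module)
        live_report = mind.ask("Describe what you are experiencing right now.")
        mind.restore_module(module)
        memory_report = mind.ask("What do you remember of the last few moments?")
        results[module] = (live_report, memory_report)
    return results

if __name__ == "__main__":
    for module, reports in ablation_experiment(SimulatedMind()).items():
        print(module, reports)

Of course, a loop like this only probes reports of experience (and memories of those reports), which is exactly where the p-zombie worry re-enters if you think such reports could come apart from experience itself.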