I don’t know why we even talk about stuff like this. If you can simulate and run a bat brain, then bat consciousness is reducible. If you can simulate and run a human brain, then human consciousness is reducible.
To the best of our understanding, it takes a change to the laws of physics to make these things unsimulatable. Therefore, the prior for reductionism is high enough that you need actual evidence to counter it, not just some philosopher’s rambling argument.
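To put toy numbers on that (every number below is invented, just to show the shape of the update):

```python
# Toy Bayes update with invented numbers: a high prior for reductionism
# barely moves in response to weak evidence like an armchair argument.
prior = 0.99            # P(reductionism), given that known physics is simulatable
p_arg_if_true = 0.5     # P(a philosopher produces such an argument | reductionism)
p_arg_if_false = 0.6    # P(same argument | not reductionism); barely more likely

posterior = (prior * p_arg_if_true) / (
    prior * p_arg_if_true + (1 - prior) * p_arg_if_false
)
print(f"posterior: {posterior:.3f}")  # ~0.988, i.e. almost unchanged
```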
It may also help to taboo the word ‘consciousness’ in these kinds of discussions. I find that the word brings a lot of questionable baggage to the table, in addition to failing to describe what is actually going on.
I have a photograph of a bridge to sell you.
There is a difference between “the brain runs on physics” and having a reductive explanation of conscious experience. Running a simulation of a human brain and getting human behaviour out can be an explanation of behaviour, but not of experience. In order to explain experience, we should have the simulation provide experience as an output and verify that the predicted experience matches the actual one.
To be concrete, if we had a properly worked-out explanation of experience, I would want it to answer questions like “when someone says that jazz music is physically painful, what sensation are they referring to? Is there some different sound you could play which would produce the same sensation for me? If not, why not?”
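If it helps, here is a schematic of that check; every function below is a stand-in, since nothing like a real brain simulator or a structured experience-report format actually exists:

```python
# Hypothetical verification loop: compare the experience the simulation
# predicts against the experience the person actually reports.
def simulate_experience(stimulus: str) -> dict:
    """Stand-in for running the brain simulation and reading out the
    predicted experience in some structured format."""
    return {"modality": "auditory", "valence": "painful", "stimulus": stimulus}

def report_experience(stimulus: str) -> dict:
    """Stand-in for the actual person describing what the stimulus felt like."""
    return {"modality": "auditory", "valence": "painful", "stimulus": stimulus}

stimulus = "jazz recording"
if simulate_experience(stimulus) == report_experience(stimulus):
    print("prediction matches the report; the theory survives this test")
else:
    print("mismatch: the theory of experience is falsified here")
```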
I think the most interesting point in Nagel’s paper is that in order to even be able to check whether a reductionist theory is making the right predictions, we will need to develop some skill in being aware of, and precisely describing, our subjective experience. (He also makes a separate claim: that we will need to be less confused about what subjective experience even means. But I think starting at the practical-skills end of the problem is more promising.)
There is a difference between, e.g., going skydiving and watching a detailed brain simulation of someone going skydiving (I would pay different amounts for each of these, and so would most people). This remains true under reductionism. Therefore there is a difference between genuinely experiencing something and observing an abstract representation of an experience—and we can tell the difference. This seems to make questions of the type “is X experiencing Z” meaningful. Potentially even if Z is “being a bat”.
Basically, we are arguing semantics.
According to a functionalist definition of consciousness, you experience like a bat if you behave like a bat. That’s essentially the “strong” Turing test view.
According to a structuralist definition, you experience like a bat if you share the same type of brain states. That’s Searle’s view.
According to eliminativism, consciousness and subjective experience are folk psychology concepts with no scientific utility, similar to sky gods causing lightning. That’s the radical behaviorist view.
And I’m arguing that there are ways of seeing evidence for “X knows what it is like to be Z” that are different from the ones above.
Suppose we have a transsexual (female to male) who writes a book, “What it’s like to be a man—unexpected insights for women from one who used to be one of them.” It’s full of descriptions of facts about being a man that a) almost all men think are true, and b) almost all women find surprising when they read about them.
Then some gal comes along and says, “I’ve been confined to an all-female colony all my life, but I’ve chatted with many men online, and I think I really know what it’s like to be a man.” She then proceeds to name a lot of facts that are indeed generally true about men (and a lot of them were in the transsexual’s book). We look at her chat logs, and none of these facts were mentioned.
Then we’d be justified in saying that she really understood what it’s like to be a man. If, however, we knew that she’d read the transsexual’s book, we’d be justified in rejecting that interpretation. So this is a “weak” Turing test view, along the lines of “if X passes the Turing test, and X was not trained specifically to pass the Turing test, then...”
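As a toy decision rule (my own sketch, nothing formal):

```python
def weak_turing_verdict(passes: bool, coached_on_material: bool) -> str:
    """Toy rule for the 'weak' Turing test above: a pass only counts as
    evidence of understanding if the subject wasn't trained on the material."""
    if not passes:
        return "no evidence of understanding"
    if coached_on_material:
        return "pass explained away: she could just be copying the book"
    return "evidence that she really knows what it's like"

# Before and after we learn she read the book:
print(weak_turing_verdict(passes=True, coached_on_material=False))
print(weak_turing_verdict(passes=True, coached_on_material=True))
```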
Except that in both cases she actually knows, in an epistemic sense, what it is like to be a man. The only difference is that she may have never experienced certain mental states that are unique to men. So what?
Really? How do we know that? What makes female-male a difference in kind from human-bat, rather than a question of degree?
The difference is that you can’t read a book by a human who used to be a bat. But if you could (e.g., if the author were a vampire), or if you were some super-neuroscientist who did very accurate studies of the bat brain, you could, in principle, know what it is like to be a bat, in an epistemic sense.
In my model, the woman was deducing epistemic facts about men, and the most likely explanation was that she was generalising from the knowledge she had to construct a subjective experience that mirrored that of a man (rather than reading the facts in a book and copying them). This explanation has testable differences from getting the facts out of a book, e.g. whether she will answer correctly when asked “this man faces [unknown new situation]; what does he do?”
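Concretely, the test could score her on held-out questions whose answers appear in nothing she has read; a toy version, with invented numbers:

```python
from math import comb

# Held-out test: questions about situations she has never read about,
# so copying can't explain a high score. All numbers are invented.
n_questions = 20   # "this man faces [unknown new situation]; what does he do?"
n_correct = 17     # her hypothetical score
p_chance = 0.25    # assume four plausible answers per question

# Binomial tail: probability of scoring at least this well by guessing.
p_by_luck = sum(
    comb(n_questions, k) * p_chance**k * (1 - p_chance) ** (n_questions - k)
    for k in range(n_correct, n_questions + 1)
)
print(f"P(>= {n_correct}/{n_questions} correct by luck) = {p_by_luck:.2e}")
# tiny => generalisation from a real model of men, not lucky guessing
```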
Sure, mental states are physical configurations of the brain, which is a piece of matter, so the question of whether a certain piece of matter in the universe is or was in a certain physical configuration is in principle amenable to scientific enquiry.
My question is: what is the point? I mean, in some circumstances it may certainly be useful to determine whether somebody is lying or telling the truth, but in general, if somebody’s beliefs are epistemically correct, does it matter what specific subjective experiences are associated with them?
Agreed, but I bet some people would pay more for the real thing, and others would pay more for the simulation.
Which is all I need to argue they are indeed different things, in relevant ways ^_^