If functionalism is true then dualism is true. You have the same experience E hovering over the different physical situations A, B, and C, even when they are as materially diverse as neurons, transistors, and someone in a Chinese room.
It should already be obvious that an arrangement of atoms in space is not identical to any particular experience that is supposed somehow to inhabit it, and so it should already be obvious that the standard materialist approach to consciousness is actually property dualism. But perhaps the observation that the experience is supposed to be exactly the same, even when the arrangement of atoms is really different, will help a few people to grasp this.
Perhaps one can construe functionalism as a form of dualism, but if so then it’s a curious state of affairs because then one can be a ‘dualist’ while still giving typically materialist verdicts on all the familiar questions and thought experiments in the philosophy of mind:
Artificial intelligence is possible and the ‘systems reply’ to the Chinese Room thought experiment is substantially correct.
“Zombies” are impossible (even a priori).
Libertarian free will is incoherent, or at any rate false.
There is no ‘hard problem of consciousness’ qualitatively distinct from the ‘easy problems’ of figuring out how the brain’s structure and functional organization are able to support the various cognitive competences we observe in human behaviour.
[This isn’t part of what “functionalism” is usually taken to mean, but it’s hard to see how a thoroughgoing functionalist could avoid it:] There aren’t always ‘facts of the matter’ about persisting subjective identity. For instance, in “cloning and teleportation” thought experiments the question of whether my mind ceases to exist or is ‘transferred’ to another body, and if so, which body, turns out to be meaningless.
[As above:] There isn’t always a ‘fact of the matter’ as to whether a being (e.g. a developing foetus) is conscious.
If you guys are prepared to concede all of these and similar bones of contention, I don’t think we’d have anything further to argue about—we can all proudly proclaim ourselves dualists, lament the sterile emptiness of the reductionist vision of the world into whose thrall so many otherwise great thinkers have fallen, and sing life-affirming hymns to the richness and mystery of the mind.
How do you get that from functionalism?
Continuity: The idea that if you look at what’s going on in a developing brain (or, for that matter, a deteriorating brain) there are no—or at least there may not be any—sudden step changes in the patterns of neural activity on which the supposed mental state supervenes.
Or again, one can make the same point about the evolutionary tree. If you consider all of the animal brains there are and ever have been, there won’t be any single criterion, even at the level of ‘functional organisation’, which distinguishes conscious brains from unconscious ones.
This is partly an empirical thesis, insofar as we can actually look and see whether there are such ‘step changes’ in ontogeny and phylogeny. It’s only partly empirical because even if there were, we couldn’t verify that those changes were precisely the ones that signified consciousness.
But surely, if we take functionalism seriously then the lack of any plausible candidates for a discrete “on-off” functional property to coincide with consciousness suggests that consciousness itself is not a discrete “on-off” property.
Doesn’t this argument apply to everything else about consciousness as well—whether a particular brain is thinking something, planning something, experiencing something? According to functionalism, being in any specific conscious state should be a matter of your brain possessing some specific causal/functional property. Are you saying that no such properties are ever definitely and absolutely possessed? Because that would seem to imply that no-one is ever definitely in any specific conscious state—i.e. that there are no facts about consciousness at all.
I think ciphergoth is correct to mention the Sorites paradox.
It always surprises me when people refuse to swallow this idea that “sometimes there’s no fact of the matter as to whether something is conscious”.
However difficult it is to imagine how it can be true, it’s just blindingly obvious that our bodies and minds are built up continuously, without any magic moment when ‘the lights switch on’.
If you take the view that, in addition to physical reality, there is a “bank of screens” somewhere (like in the film Aliens) showing everyone’s points of view then you’ll forever be stuck with the discrete fact that either there is a screen allocated to this particular animal or there isn’t. But surely the correct response here is simply to dispense with the idea of a “bank of screens”.
We need to understand that consciousness behaves as it does irrespectively of our naive preconceptions, rather than trying to make it analytically true that consciousness conforms to our naive preconceptions and using that to refute materialism.
I’ll stick with the principle
The possibility of exact description of states on both sides [conscious subjectivity, physical brain], and of exactly specifying the mapping between them, must exist in any viable theory of consciousness. Otherwise, it reifies uncertainty in a way that has the same fundamental illogicality as the “particle without a definite position”.
So the only way I can countenance the idea
sometimes there’s no fact of the matter as to whether something is conscious
is if this arises because of vagueness in our description of consciousness from within. Some things not only exist but “have an inside” (for example, us); some things, one usually supposes, “just exist” (for example, a rock); and perhaps there are intermediate states between having an inside and not having an inside that we don’t understand well, or don’t understand at all. This would mean that our first-person concept of the difference between conscious and non-conscious was deficient, that it only approximated reality.
But I don’t see any sensitivity to that issue in what you write. Your arguments are coming entirely from the third-person, physical description, the view from outside. You think there’s a continuum of states between some that are definitely conscious, and some that are definitely not conscious, and so you conclude that there’s no sharp boundary between conscious and non-conscious. The first-person description features solely as an idea of a “screen” that we can just “dispense with”. Dude, the first-person description describes the life you actually live, and the only reality you ever directly experience!
What would happen if you were to personally pass from a conscious to a non-conscious state? To deny that there’s a boundary is to say that there’s no fact about what happens to you in that scenario, except that at the start you’re conscious, and at the end you’re not, and we can’t or won’t think or say anything very precise about what happens in between—unless it’s expressed in terms of neurons and atoms and other safely non-subjective entities, which is missing the point. The loss of consciousness, whether in sleep or in death, is a phenomenon on the first-person side of this divide, which explores and crosses the boundary between conscious and non-conscious. It’s a thing that happens to you, to the subject of your experience, and not just to certain not-you objects contemplated by that subject in the third-person, objectifying mode of its experience.
You know, there’s not even any profound physical reason to support the argument from continuity. The physical world is full of qualitative transitions.
it’s just blindingly obvious that our bodies and minds are built up continuously, without any magic moment when ‘the lights switch on’.
Couldn’t you make the same argument about literally switching on a light? :-) Obviously the idea that a light is sometimes on and sometimes off is a naive preconception that we should dispense with.
Couldn’t you make the same argument about literally switching on a light? :-) Obviously the idea that a light is sometimes on and sometimes off is a naive preconception that we should dispense with.
Correct—the impression that it is an instantaneous, discontinuous process is an illusion caused by the speed of the transition compared to the speed of our perceptions.
Yeah, but I think “mental discretists” can tolerate that kind of very-rapid-but-still-continuous physical change—they just have to say that a mental moment corresponds to (its properties correlate with those of) a smallish patch of spacetime.
I mean, if you believe in unified “mental moments” at all then you’ve got to believe something like that, just because the brain occupies a macroscopic region of space, and because of the finite speed of light.
But this defense becomes manifestly absurd if we can draw out the grey area sufficiently far (e.g. over the entire lifetime of some not-quite-conscious animal).
That, and the stability of the states on either side.
perhaps there are intermediate states between having an inside and not having an inside that we don’t understand well, or don’t understand at all. This would mean that our first-person concept of the difference between conscious and non-conscious was deficient, that it only approximated reality.
Well then I’m not sure that we disagree substantively on this issue.
Basically, I’ve said: “Naive discrete view of consciousness --> Not always determinate whether something is conscious”. (Or rather that’s what I’ve meant to say but tended to omit the premise.)
Whereas I think you’re saying something like: “At the level of metaphysical reality, there is no such thing as indeterminacy (apparent indeterminacy only arises through vague or otherwise inadequate language) --> Whatever the true nature of subjective experience, the facts about it must be determinate”
Clearly these two views are compatible with one another (as long as I state my premise). (However, there’s room to agree with the letter but not the spirit of your view, by taking ‘the true nature of subjective experience’ to be something ridiculously far away from what we usually think it is and holding that all mentalistic language (as we know it) is irretrievably vague.)
You know, there’s not even any profound physical reason to support the argument from continuity. The physical world is full of qualitative transitions.
I’m not sure exactly what you’re thinking of here, but I seem to recall that you’re sympathetic to the idea that physics is important in the philosophy of mind. Anyway, I think the idea that a tiny ‘quantum leap’ could make the difference between a person being (determinately) conscious and (determinately) unconscious is an obvious non-starter.
Couldn’t you make the same argument about literally switching on a light? :-) Obviously the idea that a light is sometimes on and sometimes off is a naive preconception that we should dispense with.
Well, this is where we actually need to look at the empirical data and see whether a foetus seems to ‘switch on’ like a light at any point. I’ve assumed there is no such point, but what I know about embryology could be written on the back of a postage stamp. (But come on, the idea is ridiculous and I see no reason to disingenuously pretend to be agnostic about it.)
Maybe you’re familiar with the phenomenon of “waking up”. Do you agree that this is a real thing? If so, does it not imply that it once happened to you for the first time?
Whatever the true nature of subjective experience, the facts about it must be determinate
I agree with that.
there’s room to agree with the letter but not the spirit of your view, by taking ‘the true nature of subjective experience’ to be something ridiculously far away from what we usually think it is and holding that all mentalistic language (as we know it) is irretrievably vague.
What do you think you are doing when you use mentalistic language, then? Do you think it bears no relationship to reality?
A little group of neurons in the brain stem starts sending a train of signals to the base of the thalamus. The thalamus ‘wakes up’ and then sends signals to the cortex, and the cortex ‘wakes up’. Consciousness is now ‘on’. Later, the brain stem stops sending the train of signals, the thalamus ‘goes to sleep’, and the cortex slowly winds down and ‘goes to sleep’. Consciousness is now ‘off’. Neither switching on nor switching off was instantaneous or sharply defined. (Dreaming activates the cortex differently at times during sleep, but ignore that for now.) Descriptions like this (hopefully more detailed and accurate) are the ‘facts of the matter’, not semantic arguments. Why is it that science is OK for understanding physics and astronomy but not for understanding consciousness?
Why is it that science is OK for understanding physics and astronomy but not for understanding consciousness?
Science in some broad sense “is OK… for understanding consciousness”, but unless you’re a behaviorist, you need to be explaining (and first, you need to be describing) the subjective side of consciousness, not just the physiology of it. It’s the facts about subjectivity which make consciousness a different sort of topic from anything in the natural sciences.
Yes, we will have to describe the subjective side of consciousness, but the physiology has to come first. As an illustration: if you didn’t know the function of the heart or much about its physiology, it would be useless to try to understand it by how it felt. Hence we would have ideas like ‘loving with all my heart’, ‘my heart is not in it’, etc., which come from the pre-biology world. Once we know how and why the heart works the way it does, those feelings are seen differently.
I am certainly not a behaviorist and I do think that consciousness is an extremely important function of the brain/mind. We probably can’t understand how cognition works without understanding how consciousness works. I just do not think introspection gets us closer to understanding, nor do I think that introspection gives us any direct knowledge of our own minds - ‘direct’ being the important word.
Maybe you’re familiar with the phenomenon of “waking up”. Do you agree that this is a real thing? If so, does it not imply that it once happened to you for the first time?
Right, people wake up and go to sleep. Waking can be relatively quicker or slower depending on the manner of awakening, but… I’m not sure what you think this establishes.
In any case, a sleeping person is not straightforwardly ‘unconscious’: their mind hasn’t “disappeared”, it’s just doing something very different from what it does when awake. A better example would be someone ‘coming round’ from a spell of unconsciousness, and here I think you’ll find that people remember it being a gradual process.
Your whole line of attack here is odd: all that matters for the wider debate is whether or not there are any smooth, gradual processes between consciousness and unconsciousness, not whether or not there also exist rapid-ish transitions between the two.
What do you think you are doing when you use mentalistic language, then? Do you think it bears no relationship to reality?
There are plenty of instances where language is used in such a way that its vagueness cannot possibly be eliminated, and yet it manages to be meaningful. E.g. “The Battle of Britain was won primarily because the Luftwaffe switched the focus of their efforts from knocking out the RAF to bombing major cities.” (N.B. I’m not claiming this is true (though it may be), simply that it “bears some relationship to reality”.)
Your whole line of attack here is odd: all that matters for the wider debate is whether or not there are any smooth, gradual processes between consciousness and unconsciousness, not whether or not there also exist rapid-ish transitions between the two.
I am objecting, first of all, to your assertion that the idea that a fetus might “‘switch on’ like a light” at some point in its development is “ridiculous”. Waking up was supposed to be an example of a rapid change, as well as something real and distinctive which must happen for a first time in the life of an organism. But I can make this counterargument even just from the physiological perspective. Sharp transitions do occur in embryonic development, e.g. when the morphogenetic motion of tissues and cavities produces a topological change in the organism. If we are going to associate the presence of a mind, or the presence of a capacity for consciousness, with the existence of a particular functional organization in the brain, how can there not be a first moment when that organization exists? It could consist in something as simple as the first synaptic coupling of two previously separate neural systems. Before the first synapses joining them, certain computations were not possible; after the synapses had formed, they were possible.
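The graph-theoretic point here is worth making concrete: whether two subsystems can exchange signals at all is a connectivity fact about the wiring diagram, and it flips when a single connection is added. The sketch below is my own toy illustration, not anything from the thread; the node labels and the “two neural systems” framing are invented for the example.

```python
# Toy illustration: some functional properties really are discrete.
# Whether a signal can get from one subsystem to another is a
# graph-connectivity fact, and it appears at a single added edge --
# "the first synaptic coupling of two previously separate neural systems".
from collections import deque

def reachable(edges: set[tuple[int, int]], start: int, goal: int) -> bool:
    """Breadth-first search: can a signal travel from `start` to `goal`?"""
    adj: dict[int, list[int]] = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Two separate "neural systems": nodes 0-2 and nodes 3-5.
edges = {(0, 1), (1, 2), (3, 4), (4, 5)}
print(reachable(edges, 0, 5))   # False: no route between the systems

edges.add((2, 3))               # the first "synapse" joining them
print(reachable(edges, 0, 5))   # True: signals can now cross
```

Before the bridging edge, certain computations (anything requiring information from both halves) were impossible; after it, they are possible, with no intermediate stage.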
As for the significance of “smooth, gradual” transitions between consciousness and unconsciousness, I will revert to that principle which you expressed thus:
“Whatever the true nature of subjective experience, the facts about it must be determinate”
Among the facts about subjective experience are its relationship to “non-subjective” states or forms of existence. Those facts must also be determinate. The transition from consciousness to non-consciousness, if it is a continuum, cannot only be a continuum on the physical/physiological side. It must also be a continuum on the subjective side, even though one end of the continuum is absence of subjectivity. When you say there can be material systems for which there is no fact about their being conscious—they’re not conscious, they’re not not-conscious—you are being just as illogical as the people who believe in “the particle without a definite position”.
I ask myself why you would even think like this. Why wouldn’t you suppose instead that folk psychology can be conceptually refined to the point of being exactly correct? Why the willingness to throw it away, in favor of nothing?
Maybe I’m missing something, but I can’t see in what way this argument is specifically about consciousness, rather than just being a re-hash of the Sorites Paradox—could you spell it out for me?
If we were just talking about names this wouldn’t matter, but we are talking about explanations. Vagueness in a name just means that the applicability of the name is a little undetermined. But there is no such thing as objective vagueness. The objective properties of things are “exact”, even when we can only specify them vaguely.
This is what we all object to in the Copenhagen interpretation of quantum mechanics, right? It makes no sense to say that a particle has a position, if it doesn’t have a definite position. Either it has a definite position, or the concept of position just doesn’t apply. There’s no problem in saying that the position is uncertain, or in specifying it only approximately; it’s the reification of uncertainty—the particle is somewhere, but not anywhere in particular—which is nonsense. Either it’s somewhere particular (or even everywhere, if you’re a many-worlder), or it’s nowhere.
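For what it’s worth, the quantum point can be stated exactly in standard textbook notation (this is a gloss of mine, not something from the thread):

```latex
% A particle in the superposition state
\[
  \lvert \psi \rangle \;=\; \int \psi(x)\, \lvert x \rangle \, dx ,
  \qquad \text{with } \psi(x) \text{ not concentrated at any single } x,
\]
% has no definite position, yet the state itself is exactly specified:
% $\psi$ is a perfectly precise mathematical object. The incoherent move
% is not admitting an indefinite position; it is insisting that the
% particle nevertheless "has a position" that is no particular $x$.
```

That is the sense in which uncertainty can be described exactly without being reified.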
Neil flirts with reifying vagueness about consciousness in a similarly untenable fashion. We can be vague about how we describe a subjective state of consciousness, we can be vague about how we describe the physical brain. But we cannot identify an exact property of a conscious state with an inherently vague physical predicate. The possibility of exact description of states on both sides, and of exactly specifying the mapping between them, must exist in any viable theory of consciousness. Otherwise, it reifies uncertainty in a way that has the same fundamental illogicality as the “particle without a definite position”.
By the way, if you haven’t read Dennett’s “Real Patterns” then I can recommend it as an excellent explanation of how fuzzily defined, ‘not-always-a-fact-of-the-matter-whether-they’re-present’ patterns, of which folk-psychological states like beliefs and desires are just a special case, can meaningfully find a place in a physicalist universe.
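Dennett’s “real patterns” idea can be given a rough computational gloss: a pattern is real to the extent that the data admit a description shorter than themselves, and that property comes in degrees. The toy below is my own illustration (the noise model and parameters are invented, and compression length is only a crude proxy for Dennett’s notion); it shows pattern-presence fading gradually rather than switching off.

```python
# Rough gloss on "Real Patterns": measure how compressible a signal is
# as noise gradually swamps a strict 0/1 alternation. Compressed size
# grows smoothly -- there is no sharp point at which the pattern
# "stops being there".
import random
import zlib

random.seed(0)  # make the run repeatable

def noisy_pattern(noise: float, n: int = 2000) -> bytes:
    """Alternating 0/1 bytes, each replaced by a random byte with probability `noise`."""
    return bytes(
        random.randrange(256) if random.random() < noise else (i % 2)
        for i in range(n)
    )

for noise in (0.0, 0.25, 0.5, 0.75, 1.0):
    size = len(zlib.compress(noisy_pattern(noise)))
    print(f"noise={noise:.2f}  compressed bytes={size}")
```

At zero noise the 2000-byte signal compresses to a few dozen bytes; at full noise it is essentially incompressible, with a continuum of intermediate cases in between.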
There’s an aspect of this which I haven’t yet mentioned, which is the following:
We can imagine different strains of functionalism. The weakest would just be: “A person’s mental state supervenes on their (multiply realizable) ‘functional state’.” This leaves the nature of the relation between functional state and mental state utterly mysterious, and thereby leaves the ‘hard problem’ looking as ‘hard’ as it ever did.
But I think a ‘thoroughgoing functionalist’ wants to go further, and say that a person’s mental state is somehow constituted by (or reduces to) the functional state of their brain. It’s not a trivial project to flesh out this idea—not simply to clarify what it means, but to begin to sketch out the functional properties that constitute consciousness—but it’s one that various thinkers (like Minsky and Dennett) have actually taken up.
And if one ends up hypothesising that what’s important for whether a system is ‘conscious’ is (say) whether it represents information a certain way, has a certain kind of ‘higher-order’ access to its own state, or whatever—functional properties which can be scaled up and down in scope and complexity without any obvious ‘thresholds’ being encountered that might correspond to the appearance of consciousness—then one has grounds for saying that there isn’t always a ‘fact of the matter’ as to whether a being is conscious.
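The Sorites structure of this argument can be shown in a few lines. The sketch below is a deliberately artificial illustration of mine (the “self-monitoring score” and the cutoff are made up, not any proposed theory of consciousness): if the underlying functional property is graded, any on/off predicate defined over it flips between two systems that differ negligibly.

```python
# Toy Sorites illustration: if "consciousness" is identified with a
# graded functional property -- here, a made-up "self-monitoring" score
# in [0, 1] -- then any on/off cutoff draws an arbitrary line through
# a continuum.

def self_monitoring_score(n_feedback_loops: int, max_loops: int = 100) -> float:
    """Invented graded measure: fraction of possible feedback couplings present."""
    return min(n_feedback_loops, max_loops) / max_loops

CUTOFF = 0.5  # any such threshold is arbitrary

def is_conscious(n_loops: int) -> bool:
    return self_monitoring_score(n_loops) >= CUTOFF

# Neighbouring systems differ negligibly in the underlying property,
# yet the predicate flips somewhere -- the classic Sorites structure.
boundary = next(n for n in range(101) if is_conscious(n))
print(boundary)           # 50
print(is_conscious(49))   # False
print(is_conscious(50))   # True
```

The functionalist conclusion in the paragraph above is just the refusal to treat any such line as marking a real difference.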
I think a ‘thoroughgoing functionalist’ wants to go further, and say that a person’s mental state is somehow constituted by (or reduces to) the functional state of their brain.
Then it’s time to return to the rest of your comment—the whole discussion so far has just been about that one claim, that something can be neither conscious nor not-conscious. So now I’ll quote myself:
The property dualism I’m talking about occurs when basic sensory qualities like color are identified with such computational properties. Either you end up saying “seeing the color is how it feels”—and “feeling” is the extra, dual property—or you say there’s no “feeling” at all—which is denial that consciousness exists. It would be better to be able to assert identity, but then the elements of a conscious experience can’t really be coarse-grained states of neuronal ensembles, etc—that would restore the dualism.
It would be better to be able to assert identity, but then the elements of a conscious experience can’t really be coarse-grained states of neuronal ensembles, etc—that would restore the dualism.
By “coarse-grained states” do you mean that, say, “pain” stands to the many particular neuronal ensembles that could embody pain, in something like the way “human being” stands to all the actual individual human beings? How would that restore a dualism, and what kind of dualism is that?
If functionalism is true then dualism is true. You have the same experience E hovering over the different physical situations A, B, and C, even when they are as materially diverse as neurons, transistors, and someone in a Chinese room.
It should already be obvious that an arrangement of atoms in space is not identical to any particular experience you may claim to somehow be inhabiting it, and so it should already be obvious that the standard materialistic approach to consciousness is actually property dualism. But perhaps the observation that the experience is supposed to be exactly the same, even when the arrangement of atoms is really different, will help a few people to grasp this.
Perhaps one can construe functionalism as a form of dualism, but if so then it’s a curious state of affairs because then one can be a ‘dualist’ while still giving typically materialist verdicts on all the familiar questions and thought experiments in the philosophy of mind:
Artificial intelligence is possible and the ‘systems reply’ to the Chinese Room thought experiment is substantially correct.
“Zombies” are impossible (even a priori).
Libertarian free will is incoherent, or at any rate false.
There is no ‘hard problem of consciousness’ qualitatively distinct from the ‘easy problems’ of figuring out how the brain’s structure and functional organization are able to support the various cognitive competences we observe in human behaviour.
[This isn’t part of what “functionalism” is usually taken to mean, but it’s hard to see how a thoroughgoing functionalist could avoid it:] There aren’t always ‘facts of the matter’ about persisting subjective identity. For instance, in “cloning and teleportation” thought experiments the question of whether my mind ceases to exist or is ‘transferred’ to another body, and if so, which body, turns out to be meaningless.
[As above:] There isn’t always a ‘fact of the matter’ as to whether a being (e.g. a developing foetus) is conscious.
If you guys are prepared to concede all of these and similar bones of contention, I don’t think we’d have anything further to argue about—we can all proudly proclaim ourselves dualists, lament the sterile emptiness of the reductionist vision of the world into whose thrall so many otherwise great thinkers have fallen, and sing life-affirming hymns to the richness and mystery of the mind.
How do you get that from functionalism?
Continuity: The idea that if you look at what’s going on in a developing brain (or, for that matter, a deteriorating brain) there are no—or at least there may not be any—sudden step changes in the patterns of neural activity on which the supposed mental state supervenes.
Or again, one can make the same point about the evolutionary tree. If you consider all of the animal brains there are and ever have been, there won’t be any single criterion, even at the level of ‘functional organisation’, which distinguishes conscious brains from unconscious ones.
This is partly an empirical thesis, insofar as we can actually look and see whether there are such ‘step changes’ in ontogeny and phylogeny. It’s only partly empirical because even if there were, we couldn’t verify that those changes were precisely the ones that signified consciousness.
But surely, if we take functionalism seriously then the lack of any plausible candidates for a discrete “on-off” functional property to coincide with consciousness suggests that consciousness itself is not a discrete “on-off” property.
Doesn’t this argument apply to everything else about consciousness as well—whether a particular brain is thinking something, planning something, experiencing something? According to functionalism, being in any specific conscious state should be a matter of your brain possessing some specific causal/functional property. Are you saying that no such properties are ever definitely and absolutely possessed? Because that would seem to imply that no-one is ever definitely in any specific conscious state—i.e. that there are no facts about consciousness at all.
I think ciphergoth is correct to mention the Sorites paradox.
It always surprises me when people refuse to swallow this idea that “sometimes there’s no fact of the matter as to whether something is conscious”.
However difficult it is to imagine how it can be true, it’s just blindingly obvious that our bodies and minds are built up continuously, without any magic moment when ‘the lights switch on’.
If you take the view that, in addition to physical reality, there is a “bank of screens” somewhere (like in the film Aliens) showing everyone’s points of view then you’ll forever be stuck with the discrete fact that either there is a screen allocated to this particular animal or there isn’t. But surely the correct response here is simply to dispense with the idea of a “bank of screens”.
We need to understand that consciousness behaves as it does irrespectively of our naive preconceptions, rather than trying to make it analytically true that consciousness conforms to our naive preconceptions and using that to refute materialism.
I’ll stick with the principle
So the only way I can countenance the idea
is if this arises because of vagueness in our description of consciousness from within. Some things not only exist but “have an inside” (for example, us); some things, one usually supposes, “just exist” (for example, a rock); and perhaps there are intermediate states between having an inside and not having an inside that we don’t understand well, or don’t understand at all. This would mean that our first-person concept of the difference between conscious and non-conscious was deficient, that it only approximated reality.
But I don’t see any sensitivity to that issue in what you write. Your arguments are coming entirely from the third-person, physical description, the view from outside. You think there’s a continuum of states between some that are definitely conscious, and some that are definitely not conscious, and so you conclude that there’s no sharp boundary between conscious and non-conscious. The first-person description features solely as an idea of a “screen” that we can just “dispense with”. Dude, the first-person description describes the life you actually live, and the only reality you ever directly experience!
What would happen if you were to personally pass from a conscious to a non-conscious state? To deny that there’s a boundary is to say that there’s no fact about what happens to you in that scenario, except that at the start you’re conscious, and at the end you’re not, and we can’t or won’t think or say anything very precise about what happens in between—unless it’s expressed in terms of neurons and atoms and other safely non-subjective entities, which is missing the point. The loss of consciousness, whether in sleep or in death, is a phenomenon on the first-person side of this divide, which explores and crosses the boundary between conscious and non-conscious. It’s a thing that happens to you, to the subject of your experience, and not just to certain not-you objects contemplated by that subject in the third-person, objectifying mode of its experience.
You know, there’s not even any profound physical reason to support the argument from continuity. The physical world is full of qualitative transitions.
Couldn’t you make the same argument about literally switching on a light? :-) Obviously the idea that a light is sometimes on and sometimes off is a naive preconception that we should dispense with.
Correct—the impression that it is an instantaneous, discontinuous process is an illusion caused by the speed of the transition compared to the speed of our perceptions.
Yeah, but I think “mental discretists” can tolerate that kind of very-rapid-but-still-continuous physical change—they just have to say that a mental moment corresponds to (its properties correlate with those of) a smallish patch of spacetime.
I mean, if you believe in unified “mental moments” at all then you’ve got to believe something like that, just because the brain occupies a macroscopic region of space, and because of the finite speed of light.
But this defense becomes manifestly absurd if we can draw out the grey area sufficiently far (e.g. over the entire lifetime of some not-quite-conscious animal.)
That, and the stability of the states on either side.
Well then I’m not sure that we disagree substantively on this issue.
Basically, I’ve said: “Naive discrete view of consciousness --> Not always determinate whether something is conscious”. (Or rather that’s what I’ve meant to say but tended to omit the premise.)
Whereas I think you’re saying something like: “At the level of metaphysical reality, there is no such thing as indeterminacy (apparent indeterminacy only arises through vague or otherwise inadequate language) --> Whatever the true nature of subjective experience, the facts about it must be determinate”
Clearly these two views are compatible with one another (as long as I state my premise). (However, there’s room to agree with the letter but not the spirit of your view, by taking ‘the true nature of subjective experience’ to be something ridiculously far away from what we usually think it is and holding that all mentalistic language (as we know it) is irretrievably vague.)
I’m not sure exactly what you’re thinking of here, but I seem to recall that you’re sympathetic to the idea that physics is important in the philosophy of mind. Anyway, I think the idea that a tiny ‘quantum leap’ could make the difference between a person being (determinately) conscious and (determinately) unconscious is an obvious non-starter.
Well, this is where we actually need to look at the empirical data and see whether a foetus seems to ‘switch on’ like a light at any point. I’ve assumed there is no such point, but what I know about embryology could be written on the back of a postage stamp. (But come on, the idea is ridiculous and I see no reason to disingenuously pretend to be agnostic about it.)
Maybe you’re familiar with the phenomenon of “waking up”. Do you agree that this is a real thing? If so, does it not imply that it once happened to you for the first time?
I agree with that.
What do you think you are doing when you use mentalistic language, then? Do you think it bears no relationship to reality?
A little group of neurons in the brain stem starts sending a train of signals to the base of the thalamus. The thalamus ‘wakes up’ and then sends signals to the cortex, and the cortex ‘wakes up’. Consciousness is now ‘on’. Later, the brain stem stops sending the train of signals, the thalamus ‘goes to sleep’, and the cortex slowly winds down and ‘goes to sleep’. Consciousness is now ‘off’. Neither switching on nor switching off was instantaneous or sharply defined. (Dreaming activates the cortex differently at times during sleep, but ignore that for now.) Descriptions like this (hopefully more detailed and accurate) are the ‘facts of the matter’, not semantic arguments. Why is it that science is OK for understanding physics and astronomy but not for understanding consciousness?
Science in some broad sense “is OK… for understanding consciousness”, but unless you’re a behaviorist, you need to be explaining (and first, you need to be describing) the subjective side of consciousness, not just the physiology of it. It’s the facts about subjectivity which make consciousness a different sort of topic from anything in the natural sciences.
Yes, we will have to describe the subjective side of consciousness, but the physiology has to come first. As an illustration: if you didn’t know the function of the heart or much about its physiology, it would be useless to try to understand it by how it felt. Hence we would have ideas like ‘loving with all my heart’, ‘my heart is not in it’, etc., which come from the pre-biology world. Once we know how and why the heart works the way it does, those feelings are seen differently.
I am certainly not a behaviorist, and I do think that consciousness is an extremely important function of the brain/mind. We probably can’t understand how cognition works without understanding how consciousness works. I just do not think introspection gets us closer to understanding, nor do I think that introspection gives us any direct knowledge of our own minds—‘direct’ being the important word.
Right, people wake up and go to sleep. Waking can be relatively quicker or slower depending on the manner of awakening, but… I’m not sure what you think this establishes.
In any case, a sleeping person is not straightforwardly ‘unconscious’: their mind hasn’t “disappeared”, it’s just doing something very different from what it’s doing when it’s awake. A better example would be someone ‘coming round’ from a spell of unconsciousness, and here I think you’ll find that people remember it being a gradual process.
Your whole line of attack here is odd: all that matters for the wider debate is whether or not there are any smooth, gradual processes between consciousness and unconsciousness, not whether or not there also exist rapid-ish transitions between the two.
There are plenty of instances where language is used in a way where its vagueness cannot possibly be eliminated, and yet manages to be meaningful. E.g. “The Battle of Britain was won primarily because the Luftwaffe switched the focus of their efforts from knocking out the RAF to bombing major cities.” (N.B. I’m not claiming this is true (though it may be), simply that it “bears some relationship to reality”.)
I am objecting, first of all, to your assertion that the idea that a fetus might “‘switch on’ like a light” at some point in its development is “ridiculous”. Waking up was supposed to be an example of a rapid change, as well as something real and distinctive which must happen for a first time in the life of an organism. But I can make this counterargument even just from the physiological perspective. Sharp transitions do occur in embryonic development, e.g. when the morphogenetic motion of tissues and cavities produces a topological change in the organism. If we are going to associate the presence of a mind, or the presence of a capacity for consciousness, with the existence of a particular functional organization in the brain, how can there not be a first moment when that organization exists? It could consist in something as simple as the first synaptic coupling of two previously separate neural systems. Before the first synapses joining them formed, certain computations were not possible; after they had formed, those computations were possible.
As for the significance of “smooth, gradual” transitions between consciousness and unconsciousness, I will revert to that principle which you expressed thus:
“Whatever the true nature of subjective experience, the facts about it must be determinate”
Among the facts about subjective experience are its relationships to “non-subjective” states or forms of existence. Those facts must also be determinate. The transition from consciousness to non-consciousness, if it is a continuum, cannot only be a continuum on the physical/physiological side. It must also be a continuum on the subjective side, even though one end of the continuum is the absence of subjectivity. When you say there can be a material system for which there is no fact about its being conscious—it’s not conscious, it’s not not-conscious—you are being just as illogical as the people who believe in “the particle without a definite position”.
I ask myself why you would even think like this. Why wouldn’t you suppose instead that folk psychology can be conceptually refined to the point of being exactly correct? Why the willingness to throw it away, in favor of nothing?
Sorites error: in your last sentence you leap from there being no discontinuities to there being no facts at all.
Neil is the one who says that sometimes, there are no facts. How do you get from no facts to facts without a discontinuity?
Maybe I’m missing something, but I can’t see in what way this argument is specifically about consciousness, rather than just being a re-hash of the Sorites Paradox—could you spell it out for me?
If we were just talking about names this wouldn’t matter, but we are talking about explanations. Vagueness in a name just means that the applicability of the name is a little undetermined. But there is no such thing as objective vagueness. The objective properties of things are “exact”, even when we can only specify them vaguely.
This is what we all object to in the Copenhagen interpretation of quantum mechanics, right? It makes no sense to say that a particle has a position, if it doesn’t have a definite position. Either it has a definite position, or the concept of position just doesn’t apply. There’s no problem in saying that the position is uncertain, or in specifying it only approximately; it’s the reification of uncertainty—the particle is somewhere, but not anywhere in particular—which is nonsense. Either it’s somewhere particular (or even everywhere, if you’re a many-worlder), or it’s nowhere.
Neil flirts with reifying vagueness about consciousness in a similarly untenable fashion. We can be vague about how we describe a subjective state of consciousness, we can be vague about how we describe the physical brain. But we cannot identify an exact property of a conscious state with an inherently vague physical predicate. The possibility of exact description of states on both sides, and of exactly specifying the mapping between them, must exist in any viable theory of consciousness. Otherwise, it reifies uncertainty in a way that has the same fundamental illogicality as the “particle without a definite position”.
By the way, if you haven’t read Dennett’s “Real Patterns” then I can recommend it as an excellent explanation of how fuzzily defined, ‘not-always-a-fact-of-the-matter-whether-they’re-present’ patterns, of which folk-psychological states like beliefs and desires are just a special case, can meaningfully find a place in a physicalist universe.
There’s an aspect of this which I haven’t yet mentioned, which is the following:
We can imagine different strains of functionalism. The weakest would just be: “A person’s mental state supervenes on their (multiply realizable) ‘functional state’.” This leaves the nature of the relation between functional state and mental state utterly mysterious, and thereby leaves the ‘hard problem’ looking as ‘hard’ as it ever did.
But I think a ‘thoroughgoing functionalist’ wants to go further, and say that a person’s mental state is somehow constituted by (or reduces to) the functional state of their brain. It’s not a trivial project to flesh out this idea—not simply to clarify what it means, but to begin to sketch out the functional properties that constitute consciousness—but it’s one that various thinkers (like Minsky and Dennett) have actually taken up.
And if one ends up hypothesising that what’s important for whether a system is ‘conscious’ is (say) whether it represents information a certain way, has a certain kind of ‘higher-order’ access to its own state, or whatever—functional properties which can be scaled up and down in scope and complexity without any obvious ‘thresholds’ being encountered that might correspond to the appearance of consciousness—then one has grounds for saying that there isn’t always a ‘fact of the matter’ as to whether a being is conscious.
Then it’s time to return to the rest of your comment—the whole discussion so far has just been about that one claim, that something can be neither conscious nor not-conscious. So now I’ll quote myself:
By “coarse-grained states” do you mean that, say, “pain” stands to the many particular neuronal ensembles that could embody pain, in something like the way “human being” stands to all the actual individual human beings? How would that restore a dualism, and what kind of dualism is that?