This is a clear and convincing account of the intuitions that lead people either to accept or to deny the existence of the Hard Problem. I’m squarely in Camp #1, and while I think the broad strokes are correct, there are two places where I think this account gets Camp #1 a little wrong on the details.
According to Camp #1, the correct explanandum is still “I claim to have experienced X” (where X is the apparent experience). After all, if we can explain exactly why you, as a physical system, uttered the words “I experienced X”, then there’s nothing else to explain. […] In other words, the two camps disagree about the epistemic status of apparently perceived experiences: for Camp #2, they’re epistemic bedrock, whereas for Camp #1, they’re model outputs of your brain, and like all model outputs of your brain, they can be wrong.
I think this is conflating two different senses of ‘claim’. The first sense is the interpersonal or speech sense: John makes a claim to you about his internal experience, in the form of speech. In this sense, ‘John claims to have a headache’ is the correct explanandum, in the Camp #1 view, of John telling you he has a headache, because it’s the closest thing to John’s actual experience that you have access to.
However, there is something different going on in the case where you yourself seem to have had an experience. You can believe you have had a certain experience without telling anybody about it, or without even uttering the words ‘I experienced X’ into an empty room, so the interpersonal or speech sense of ‘claim’ doesn’t really seem to apply. This only leaves us with the sense of ‘making a claim to yourself’, which might more precisely be called ‘thinking’ or ‘believing’.
Even in the Camp #1 view, there really is something different about a claim you make to yourself. You have privileged access to the contents of your own mind that you don’t have to the contents of other people’s minds, by virtue of the mundane physical fact that the neurones in your brain are connected to the other neurones in your brain but not to the neurones in other people’s brains. Even if you don’t utter the words ‘I experienced X’, there is still something to be explained that lies between ‘actually experiencing X’ and ‘claiming in speech to have experienced X’: why did you have the thought or belief ‘I experienced X’, instead of ‘I didn’t experience X but it would be useful for me to lie about it’? The explanandum in the case of your own experience is located a little deeper than it is in the case of the experiences of others. You can still be wrong about the underlying reality of your experiences – perhaps the memory of having a headache was falsely implanted with nefarious technology – but you have access to a type of evidence about it that John does not.
(I’ve never been able to figure out if Thomas Nagel, in ‘What is it like to be a bat?’, believes that the mere existence of this sort of privileged evidence about one’s own experiences tells us something about the nature of qualia/subjectivity. He says ‘The point of view in question is not one accessible only to a single individual. Rather it is a type.’ But, from my Camp #1 perspective, he never seems to explain what the difference is.)
So consciousness will be a densely connected part of this network – no more, no less – and it will have fuzzy boundaries because there is, ultimately, no ground truth as to what does or doesn’t constitute consciousness.
Perhaps this is overly nit-picky, but I don’t believe Camp #1 intuitions imply that consciousness is or arises from a particular ‘part’ of the brain, in the sense that you could say ‘it comes from the neurones in this region’ or ‘it comes from the subset of neurones lighting up on this fMRI’, even allowing fuzzy boundaries. There’s no reason to expect the physical substrate of the brain, or even the network topology of its connections, to always map straightforwardly to some feature or property of the mind, and particularly not for more abstract and higher-level properties. Sometimes there is such an obvious mapping (e.g. visual pathways), but there’s no more reason to expect that there is a ‘consciousness part of the brain’ than a ‘reasoning part of the brain’ or an ‘optimising part of the brain’; it might just be a thing that the whole brain is or does. By analogy, you might be able to point to a particular bit of circuitry in a computer that processes raw data from a camera sensor, but you can’t point to any one part and say ‘this is where the operating system comes from’.
The upshot is the same: Camp #1 will view consciousness as an ‘inherently fuzzy phenomenon’. We might just find it to be even fuzzier than you suggest here.
You can still be wrong about the underlying reality of your experiences – perhaps the memory of having a headache was falsely implanted with nefarious technology – but you have access to a type of evidence about it that John does not.
But presumably everyone in camp 2 will agree that memories are not perfectly reliable and that memories of experiences are different from those experiences themselves. We could be misremembering. The actually interesting case is whether you can be wrong about having certain experiences now, such that no memory is involved.
Say, you are having a strong headache. Here the headache itself seems to be the evidence. Which seems to mean you can’t be mistaken about currently having a headache.
You’re absolutely right that this is the more interesting case. I intentionally chose the past tense to make it easier to focus on the details of the example rather than the Camp #1/Camp #2 distinction per se. For completeness, I’ll try to recapitulate my understanding of Rafael’s account for the present-tense case ‘I have a headache right now’.
From my Camp #1 perspective, any mechanistic description of the brain that explained why it generated the thought/belief/utterance ‘I have a headache right now’ instead of ‘I don’t have a headache right now’ in response to a given set of inputs would be a fully satisfying explanation. Perhaps it really is impossible for a human brain to generate the output ‘I have a headache right now’ without meeting some objective definition of a headache (some collection of facts about sensory inputs and brain state that distinguishes a headache from e.g. a stubbed toe), but there doesn’t seem to be any reason why this impossibility could not be a mundane fact conditional on the physical details of human brains. The brain is taking some combination of inputs, which might include external sensory data as well as introspective data about its own state, and generating a thought/belief/utterance output. It doesn’t seem impossible in principle that, by tweaking certain connections or using TMS or whatever, the mapping between these inputs and outputs could be altered such that the brain reliably generates the output ‘I don’t have a headache right now’ in situations where the chosen objective definition of ‘having a headache’ holds true. So, for Camp #1 the explanandum really is the output ‘I have a headache right now’. (The purpose of my comment was to expand the definition of ‘output’ to explicitly include thoughts and beliefs as well as utterances, and to acknowledge that the inputs in the case ‘I have a headache’ really are different to those in the case ‘John says he has a headache’.)
Camp #2 would say that it is impossible even in principle to be mistaken about the experience of having a headache. They might say it is impossible to meaningfully define ‘having a headache’ only in terms of sensory and/or introspective inputs to the brain. In their view, there is a sort of hard, irreducible kernel of experiencing-a-headache-subjective-qualia-stuff which is closely entangled with the objective inputs and outputs (they would agree that you are more likely to experience a headache if you were hit on the head with a hammer, and more likely to say ‘I have a headache’ if you were experiencing a headache), but nevertheless exists independent from and in addition to these objective facts and is not reducible to an account of only the inputs, outputs, and mapping between them. The explanandum, in their view, is the subjective-qualia-stuff. Camp #2 would fully admit that it’s really difficult to pin down the nature of the subjective-qualia-stuff; that’s why it’s a Hard Problem.
I’ve done my best here to represent Camp #2 accurately, but it’s difficult because their perspective is very alien to me. Apologies in advance to any Camp #2 people and happy to hear your corrections.
Okay, so you are saying that in the first-person case, the evidence for having a headache is not itself the experience of having a headache, but the belief that you have the experience of having a headache. So according to you, one could be wrong about currently having a headache, namely when the aforementioned belief is false, when you have the belief but not the experience. Is this right?
If so, I see two problems with this.
Intuitively it doesn’t seem possible to be wrong about one’s own current mental states. Imagine a patient complains to a doctor about having a terrible headache. The doctor replies: “You may be sure you are having a terrible headache, but maybe you are wrong and actually don’t have a headache at all.” Or a psychiatrist: “I’m sure you aren’t lying, but you may yourself be mistaken about being depressed right now, maybe you are actually perfectly happy”. These cases seem absurd. I don’t remember any case where I considered myself being wrong about a current mental state. We don’t say: I just thought I was feeling pain, but actually I didn’t.
A belief seems to be itself a mental state. So even if you add the belief as an intermediary layer of evidence between the agent and their experience, then you still have something which the agent is infallible about: Their belief. The evidence for having a belief would be the belief itself. Beliefs seem to be different from utterances, in that the latter are mechanistically describable third person events (sound waves), while beliefs seem to be just as mental as experiences. So the explanandum, the evidence, would in both cases be something mental. But it seems you require the explanandum to be something “objective”, like an utterance.
Okay, so you are saying that in the first-person case, the evidence for having a headache is not itself the experience of having a headache, but the belief that you have the experience of having a headache.
Not quite. I would say that in the first-person case, the explanandum – the thing that needs to be explained – is the belief (or thought, or utterance) that you have the experience of having a headache. Once you have explained how some particular set of inputs to the brain led to that particular output, you have explained everything that is going on, in the Camp #1 view. Quoting the original post, in the Camp #1 view ‘if we can explain exactly why you, as a physical system, uttered the words “I experienced X”, then there’s nothing else to explain.’
So according to you, one could be wrong about currently having a headache, namely when the aforementioned belief is false, when you have the belief but not the experience. Is this right?
I would actually agree that ‘you can’t be mistaken about your own current experiences’, but I think the problem Rafael’s post points out is that Camp #1 and Camp #2 would understand that to mean different things.
Intuitively it doesn’t seem possible to be wrong about one’s own current mental states.
I’m a bit confused about what you mean by ‘mental states’. It’s certainly possible to be wrong about one’s own current mental state, as I understand the term; people experiencing psychosis usually firmly believe they are not psychotic. I don’t think the two Camps would disagree on this.
The three examples you mention, of having a headache, being depressed (by which I assume you mean feeling down rather than the psychiatric condition specifically), and feeling pain, all seem like examples of subjective experiences. Insofar as this paragraph is saying ‘it’s not possible to be wrong about your own subjective experience’, I would agree, with the caveat as above that what I think this means might be different to what a Camp #2 person thinks this means.
So the explanandum, the evidence, would in both cases be something mental. But it seems you require the explanandum to be something “objective”, like an utterance.
I don’t require the explanandum to be an utterance, and I don’t think there’s any important sense in which an utterance is more objective than a thought or belief. My original comment was intended only to point out that in the first-person case you have privileged access to certain data, namely the contents of your own mind, that you don’t have in the third-person case. The reasons for this are completely mundane and conditional on the current state of affairs, namely that we currently have no practical way of accessing the semantic content inside each other’s skulls other than via speech. It’s possible to imagine technology that might change this state of affairs, like a highly accurate thought-reading device for example.
I do think the explanandum is required to be an output, because being able to explain or predict the output is the test of your model of what is going on. If you predict ‘this person is going to say they don’t have a headache’, and the person says ‘I have a headache’, then there’s something wrong with your model.
I don’t require the explanandum to be an utterance, and I don’t think there’s any important sense in which an utterance is more objective than a thought or belief.
I think this is the crucial point of contention. I find the following obvious: thoughts or beliefs are on the same subjective level as experiences, which is quite different from utterances, which are purely mechanical third-person events, similar to the movement of a limb. In your view however, if I’m not misunderstanding you, beliefs are more similar to utterances than to experiences. So while I think beliefs are equally hard to explain as experiences, in your view beliefs are about as easy to explain as utterances. Is this a fair characterization?
The reason I think utterances are “easy” to explain is that they are physical events and therefore obviously allow for a mechanistic third-person explanation. The explanation would not in principle be different from explaining a simple spinal reflex. Nerve inputs somehow cause nerve outputs, except that for an utterance there are orders of magnitude more neurons involved, which makes the explanation much harder in practice. But the principle is the same.
For subjective attitudes like beliefs and experiences the explanandum is not just a mouth movement (as in the case of utterances) which would be directly caused by nervous signals. It is unclear how to even grasp subjective beliefs and experiences in a mechanical language of cause and effect. As an illustration, it is not obvious why an organism couldn’t theoretically be a p-zombie—have the usual neuronal configuration, behave completely normally, make all the same utterances—without having any subjective beliefs or experiences.
(It seems vaguely plausible to me that for beliefs and experiences, a reductive, rather than causal, explanation would be needed. Yet the model of other reductive explanations in science, like explaining the temperature of a gas with the average kinetic energy of the particles it is made out of, doesn’t obviously fit what would be needed in the case of mental states. But this is a longer story.)
Huh, this is interesting. I wouldn’t have suspected this to be the crux. I’m not sure how well this maps to the Camp 1 vs 2 difference as opposed to idiosyncratic differences in our own views.
In your view however, if I’m not misunderstanding you, beliefs are more similar to utterances than to experiences. So while I think beliefs are equally hard to explain as experiences, in your view beliefs are about as easy to explain as utterances. Is this a fair characterization?
This is a fair characterisation, though I don’t think ease of explanation is a crucial point. I would certainly say that beliefs are more similar to utterances than to experiences. To illustrate this, sitting here now on the surface of Earth I think it’s possible for me to produce an utterance that is about conditions at the centre of Jupiter, and I think it’s possible for me to have a belief or a thought that is about conditions at the centre of Jupiter, and all of these could stand in a truth relation to what conditions are actually like at the centre of Jupiter. I don’t think I can have an experience that is about conditions at the centre of Jupiter. Strictly, I don’t think I can have an experience that is ‘about’ anything. I don’t think experiences are models of the world, in the way that utterances, beliefs, and thoughts can be. This is why I would agree that it is not possible to be mistaken about an experience, though in everyday language we often elide experiences with claims about the world that do have truth values (‘it looks red’ almost always means ‘I believe it is actually red’, not ‘when I look at it I experience seeing red but maybe that’s just a hallucination’).
I find the following obvious: thoughts or beliefs are on the same subjective level as experiences,
What do you see as the important difference between ‘subjective’ and ‘objective’? Is subjectivity about who has access to a phenomenon, or is it a quality of the phenomenon itself?
The reason I think utterances are “easy” to explain is that they are physical events and therefore obviously allow for a mechanistic third-person explanation. The explanation would not in principle be different from explaining a simple spinal reflex. Nerve inputs somehow cause nerve outputs, except that for an utterance there are orders of magnitude more neurons involved, which makes the explanation much harder in practice. But the principle is the same.
I agree with this.
It is unclear how to even grasp subjective beliefs and experiences in a mechanical language of cause and effect.
If for the sake of argument we strike out ‘beliefs’ here and make it just about experiences, this seems to be a restatement of the Camp 1 vs 2 distinction. As a Camp 1 person, a mechanical explanation of whatever chain of events leads me to think or say that I have a headache would fully dissolve the question. I wouldn’t feel that there is anything left to explain. From what I understand of Camp 2, even given such an explanation they would still feel there is something left to explain, namely how these objective facts come together to produce subjective experience.
Mental states do not need to be “about” something, but it is pretty clear they can be. One can be just happy, but it seems one can also be happy about something. One certainly can wish for something, or fear that something is the case, or hope for it, etc. The form in the following is the same: the belief that x, the desire that x, the fear that x, the hope that x. Here x is a proposition. In case of e.g. loving x or hating x, x is an object, not a proposition, but again the mental state is about something. These states all seem hard to explain in a way that utterances aren’t.
What do you see as the important difference between ‘subjective’ and ‘objective’? Is subjectivity about who has access to a phenomenon, or is it a quality of the phenomenon itself?
The relevant difference here is the access. The “subjective” is exactly that which an agent is directly acquainted with, while the “objective” stuff is only inferred indirectly. It is unclear how one could explain one with the other.
As a Camp 1 person, a mechanical explanation of whatever chain of events leads me to think or say that I have a headache would fully dissolve the question. I wouldn’t feel that there is anything left to explain.
As I said, it is unclear what such a mechanical explanation of a thought or belief would look like. It is clear that utterances are caused by mouth movements which are caused by neurons firing, but it is not clear how neurons could “cause” a belief, or how to otherwise (e.g. reductively) explain a belief. It is not clear how to distinguish p-zombies from normal people, or explain why they wouldn’t be possible.
Mental states do not need to be “about” something, but it is pretty clear they can be.
I’m still a bit confused by what you mean by ‘mental states’. My best guess is that you are using it as a catch-all term for everything that is or might be going on in the brain, which includes experiences, beliefs, thoughts, and more general states like ‘psychotic’ or ‘relaxed’.
I agree that mental states do not need to be about something, but I think beliefs do need to be about something and thoughts can be about something (propositional in the way you describe). I don’t think an experience can be propositional. I don’t understand how this relates to whether these particular mental states are able to be explained.
It is clear that utterances are caused by mouth movements which are caused by neurons firing, but it is not clear how neurons could “cause” a belief, or how to otherwise (e.g. reductively) explain a belief.
My best account for what is going on here is that we have two interacting intuitive disagreements:
The ‘ordinary’ Camp 1 vs 2 disagreement, as outlined in Rafael’s post, where we disagree about where the explanandum lies in the case of subjective experience.
A disagreement over whether whatever special properties subjective experience has also extend to other mental phenomena like beliefs, such that in the Camp 2 view there would be a Hard Problem of why and how we have beliefs analogous to or identical with the Hard Problem of why and how we have subjective experience.
I’m still a bit confused by what you mean by ‘mental states’. My best guess is that you are using it as a catch-all term for everything that is or might be going on in the brain, which includes experiences, beliefs, thoughts, and more general states like ‘psychotic’ or ‘relaxed’.
I would not count “psychotic” here, since one is not necessarily directly acquainted with it (one doesn’t necessarily know one has it).
I don’t think an experience can be propositional. I don’t understand how this relates to whether these particular mental states are able to be explained.
I thought you saw the fact that beliefs are about something as evidence that they are easier to explain than experiences, or that they at least more similar to utterances than to experiences. I responded that aboutness (technical term: intentionality) doesn’t matter, as several things that are commonly regarded as qualia, just like experiences, can be about something, e.g. loves or fears. So accepting beliefs as explanandum would not be in principle different from accepting fears or experiences as explanandum, which would seem to put you more in Camp #2 rather than #1.
I think the main disagreement is actually just one, the above: What counts as a simple explanandum such that we would not run into hard explanatory problems? My position is that only utterances act as such a simple explanandum, and that no subjective mental state (things we are directly acquainted with, like intentional states, emotions and experiences) is simple in this sense, since they are not obviously compatible with any causal explanation.
I would not count “psychotic” here, since one is not necessarily directly acquainted with it (one doesn’t necessarily know one has it).
Would it be fair to say then that by ‘mental states’ you mean ‘everything that the brain does that the brain can itself be aware of’?
I thought you saw the fact that beliefs are about something as evidence that they are easier to explain than experiences
I don’t think there is any connection between whether a thought/belief/experience is about something and whether it is explainable. I’m not sure about ‘easier to explain’, but it doesn’t seem like the degree of easiness is a key issue here. I hold the vanilla Camp 1 view that everything the brain is doing is ultimately and completely explainable in physical terms.
or that they at least more similar to utterances than to experiences
I do think beliefs are more similar to utterances than experiences. If we were to draw an ontology of ‘things brains do’, utterances would probably be a closer sibling to thoughts than to beliefs, and perhaps a distant cousin to experiences. A thought can be propositional (‘the sky is blue’) or non-propositional (‘oh no!’), as can an utterance, but a belief is only propositional, while an experience is never propositional. I think an utterance could be reasonably characterised as a thought that is not content to stay swimming around in the brain but for whatever reason escapes out through the mouth. To be clear though, I don’t think any of this maps on to the question of whether these phenomena are explicable in terms of the physical implementation details of the brain.
So accepting beliefs as explanandum would not be in principle different from accepting fears or experiences as explanandum, which would seem to put you more in Camp #2 rather than #1.
I think there is an in-principle difference between Camp 1 ‘accepting beliefs [or utterances] as explanandum’ and Camp 2 ‘accepting experiences as explanandum’. When you ask ‘What counts as a simple explanandum such that we would not run into hard explanatory problems?’, I think the disagreement between Camp 1 and Camp 2 in answering this question is not over ‘where the explanandum is’ so much as ‘what it would mean to explain it’.
It might help here to unpack the phrase ‘accepting beliefs as explanandum’ from the Camp 1 viewpoint. In a way this is a shorthand for ‘requiring a complete explanation of how the brain as a physical system goes from some starting state to the state of having the belief’. The belief or utterance as explanandum works as a shorthand for this for the reasons I mentioned above, i.e. that any explanation that does not account for how the brain ended up having this belief or generating this utterance is not a complete and satisfactory explanation. This doesn’t privilege either beliefs or utterances as special categories of things to be explained; they just happen to be end states that capture everything we think is worth explaining about something like ‘having a headache’ in particular circumstances like ‘forming a belief that I have a headache’ or ‘uttering the sentence “I have a headache”’.
By analogy, suppose that I was an air safety investigator investigating an incident in which the rudder of a passenger jet went into a sudden hardover. The most appropriate explanandum in this case is ‘the rudder going into a sudden hardover’, because any explanation that doesn’t end with ‘...and this causes the rudder to go into a sudden hardover’ is clearly unsatisfactory for my purposes. Suppose I then conduct a test flight in which the aircraft’s autopilot is disconnected from the rudder, and discover a set of conditions that reliably causes the autopilot to form an incorrect model of the state of the aircraft, such that if the autopilot was connected to the rudder it would command a sudden hardover to ‘correct’ the situation. It seems quite reasonable in this case for the explanandum to be ‘the autopilot forming an incorrect model of the state of the aircraft’. There is no conceptual difference in the type of explanation required in the two cases. They can both in principle be explained in terms of a physical chain of events, which in both cases would almost certainly include some sequence of computations inside the autopilot. The fact that the explanandum in the second case is a propositional representation internal to the autopilot rather than a physical movement of a rudder doesn’t pose any new conceptual mysteries. We’re just using the explanandum to define the scope of what we’re interested in explaining.
This is distinct from the Camp 2 view, in which even if you had a complete description of the physical steps involved in forming the belief or utterance ‘I have a headache’, there would still be something left to explain, that is the subjective character of the experience of having a headache. When the Camp 2 view says that the experience itself is the explanandum, it does privilege subjective experience as a special category of things to be explained. This view asserts that experience has a property of subjectiveness that in our current understanding cannot be explained in terms of the physical steps, and it is this property of subjectiveness itself that demands a satisfactory explanation. When Camp 2 point to experience as explanandum, they’re not saying ‘it would be useful and satisfying to have an explanation of the physical sequence of events that lead up to this state’; they’re saying ‘there is something going on here that we don’t even know how to explain in terms of a physical sequence of events’. Quoting the original post, in this view ‘even if consciousness is compatible with the laws of physics, it still poses a conceptual mystery relative to our current understanding.’
Would it be fair to say then that by ‘mental states’ you mean ‘everything that the brain does that the brain can itself be aware of’?
Yeah, aware of, or conscious of. Psychosis seems to be less a mental state in this sense than a disposition to produce certain mental states.
Suppose I then conduct a test flight in which the aircraft’s autopilot is disconnected from the rudder, and discover a set of conditions that reliably causes the autopilot to form an incorrect model of the state of the aircraft, such that if the autopilot was connected to the rudder it would command a sudden hardover to ‘correct’ the situation. It seems quite reasonable in this case for the explanandum to be ‘the autopilot forming an incorrect model of the state of the aircraft’.
What you call “model” here would presumably correspond only to the externally observable neural correlate of a belief, not to the belief. The case would be the same for a neural correlate of an experience, so this doesn’t provide a difference between the two. Explaining the neural correlate is of course just as “easy” as explaining an utterance. The hard problem is to explain actual mental states with their correlates. So the case doesn’t explain the belief/experience in question in terms of this correlate. It would be unable to distinguish between a p-zombie, which does have the neural correlate but not the belief/experience, and a normal person. So it seems that either the explanation is unsuccessful as given, since it stops at the neural correlate and doesn’t explain the belief or experience, or you assume the explanandum is actually just the neural correlate, not the belief.
Apologies for the repetition, but I’m going to start by restating a slightly updated model of what I think is going on, because it provides the context for the rest of my comment. Basically I still think there are two elements to our disagreement:
The Camp 1 vs Camp 2 disagreement. Camp 1 thinks that a description of the physical system would completely and satisfactorily explain the nature of consciousness and subjective experience; Camp 2 thinks that there is a conceptual element of subjective experience that we don’t currently know how to explain in physical terms, even in principle. Camp 2 thinks there is a capital-H Hard Problem of consciousness, the ‘conceptual mystery’ in Rafael’s post; Camp 1 does not. I am in Camp 1, and as best I can tell you are in Camp 2.
You think that all(?) ‘mental states’ pose this conceptual Hard Problem, including intentional phenomena like thoughts and beliefs as well as more ‘purely subjective’ phenomena like experiences. My impression is that this is a mildly unorthodox position within Camp 2, although as I mentioned in my original comment I’ve never really understood e.g. what Nagel was trying to say about the relationship between mental phenomena being only directly accessible to a single mind and them being Hard to explain, so I might be entirely wrong about this. In any case, because I don’t believe that there is a conceptual mystery in the first place, the question of (e.g.) whether the explanandum is an utterance vs a belief means something very different to me than it does to you. When I talk about locating the explanandum at utterances vs beliefs, I’m talking about the scope of the physical system to be explained. When you talk about it, you’re talking about the location(s) of the conceptual mystery.
What you call “model” here would presumably correspond only to the externally observable neural correlate of a belief, not to the belief. The case would be the same for a neural correlate of an experience, so this doesn’t provide a difference between the two. Explaining the neural correlate is of course just as “easy” as explaining an utterance. The hard problem is to explain actual mental states with their correlates. So the case doesn’t explain the belief/experience in question in terms of this correlate.
As a Camp 1 person, I don’t think that there is any (non-semantic) difference between the observable neurological correlates of a belief or any other mental phenomenon and the phenomenon itself. Once we have a complete physical description of the system, we Camp 1-ites might bicker over exactly which bits of it correspond to ‘experience’ and ‘consciousness’, or perhaps claim that we have reductively dissolved such questions entirely; but we would agree that these are just arguments over definitions rather than pointing to anything actually left unexplained. I don’t think there is a Hard Problem.
It would be unable to distinguish between a p-zombie, which does have the neural correlate but not the belief/experience, and a normal person.
I take Dennett’s view on p-zombies, i.e. they are not conceivable.
So it seems that either the explanation is unsuccessful as given, since it stops at the neural correlate and doesn’t explain the belief or experience, or you assume the explanandum is actually just the neural correlate, not the belief.
In the Camp 1 view, once you’ve explained the neural correlates, there is nothing left to explain; whether or not you have ‘explained the belief’ becomes an argument over definitions.
Intuitively it doesn’t seem possible to be wrong about one’s own current mental states. Imagine a patient complains to a doctor about having a terrible headache. The doctor replies: “You may be sure you are having a terrible headache, but maybe you are wrong and actually don’t have a headache at all.”
Of course it’s possible, at least in principle: the doctor could have connected all your neurons that detect the headache and generate thoughts about it to another person’s neurons that generate the headache. Then you would be sure that you are having a headache, but actually it is another person who is having the headache.
You can definitely be mistaken regarding what the headache means. When the headache is extreme you may feel as if you are dying. Yet, despite feeling this way, you may not actually die.
Likewise you may feel as if your feelings are immaterial even though they are not. As soon as the question isn’t just about your immediate experience but also about how this experience is related to the world—you may very well be wrong.
You have privileged access to the contents of your own mind that you don’t have to the contents of other people’s minds, by virtue of the mundane physical fact that the neurones in your brain are connected to the other neurones in your brain but not to the neurones in other people’s brains.
You don’t just have a level of access, you have a type of access. Your access to your own mind isn’t like looking at a brain scan.
I’ve never been able to figure out if Thomas Nagel, in ‘What is it like to be a bat?’, believes that the mere existence of this sort of privileged evidence about one’s own experiences tells us something about the nature of qualia/subjectivity.
The Mary’s Room thought experiment brings it out. Mary has complete access to someone else’s mental state, from the outside, but still doesn’t experience it from the inside.
You don’t just have a level of access, you have a type of access. Your access to your own mind isn’t like looking at a brain scan.
From my Camp 1 perspective, this just seems like a restatement of what I wrote. My direct access to my own mind isn’t like my indirect access to other people’s minds; to understand another person’s mind, I can at best gather scraps of sensory data like ‘what that person is saying’ and try to piece them together into a model. My direct access to my own mind isn’t like looking at a brain scan of my own mind; to understand a brain scan, I need to gather sensory data like ‘what the monitor attached to the brain scanner shows’ and try to piece them into a model. This seems to be completely explained by the fact that my brain can only gather data about the external world through a handful of imperfect sensory channels, while it can gather data about its own internal processes through direct introspection. To make things worse, my brain is woefully underpowered for the task of modelling complex things like brains, so it’s almost inevitable that any model I construct will be imperfect. Even a scan of my own brain would give me far less insight into my mind than direct introspection, because brains are hideously complicated and I’m not well-equipped to model them.
Whether you call that a ‘level’ or ‘type’ of access, I’m still no closer to understanding how Nagel relates the (to me mundane) fact that these types of access exist to the ‘conceptual mystery’ of qualia or consciousness.
The Mary’s Room thought experiment brings it out. Mary has complete access to someone else’s mental state, from the outside, but still doesn’t experience it from the inside.
Imagine a one-in-a-million genetic mutation that causes a human brain to develop a Simulation Centre. The Simulation Centre might be thought of as a massively overdeveloped form of whatever circuitry gives people mental imagery. It is able to simulate real-world physics with the fidelity of state-of-the-art computer physics simulations, video game 3D engines, etc. The Simulation Centre has direct neural connections to the brain’s visual pathways that, under voluntary control, can override the sensory stream from the eyes. So, while a person with strong mental imagery might be able to fuzzily visualise something like a red square, a person with the Simulation Centre mutation could examine sufficiently detailed blueprints for a building and have a vivid photorealistic visual experience of looking at it, indistinguishable from reality.
Poor Mary, locked in her black-and-white room, doesn’t have a Simulation Centre. No matter how much information she is given about what wavelengths correspond to the colour blue, she will never have the visual experience of looking at something blue. Lucky Sue, Mary’s sister, was born with the Simulation Centre mutation. Even locked in a neighbouring black-and-white room, when she learns about the existence of materials that don’t reflect all wavelengths of light but only some wavelengths, Sue decides to model such a material in her Simulation Centre, and so is able to experience looking at the colour blue.
In other words: the Mary’s Room thought experiment seems to me (again, from a Camp 1 perspective) to illustrate that our brains lack the machinery to turn a conceptual understanding of a complex physical system into subjective experience.[1] This seems like a mundane fact about our brains (‘we don’t have Simulation Centres’) rather than pointing to any fundamental conceptual mystery.
This might just be a matter of degree. Some people apparently can do things like visualise a red square, and it seems reasonable that a person who had seen shapes of almost every colour before but had never happened to see a red square could nevertheless visualise one if given the concept.
From my Camp 1 perspective, this just seems like a restatement of what I wrote. My direct access to my own mind isn’t like my indirect access to other people’s minds; to understand another person’s mind, I can at best gather scraps of sensory data like ‘what that person is saying’ and try to piece them together into a model
At this point, I can prove to you that you are actually in Camp #2. All I have to do is point out that the kind of access you have to your mind is (or rather includes) qualia!
I’m still no closer to understanding how Nagel relates the (to me mundane) fact that these types of access exist to the ‘conceptual mystery’ of qualia or consciousness
The mystery relates entirely to the expectation that there should be a reductive physical explanation of qualia.
The Hard Problem of Qualia
Whilst science has helped with some aspects of the mind-body problem, it has made others more difficult, or at least exposed their difficulty. In pre-scientific times, people were happy to believe that the colour of an object was an intrinsic property of it, which was perceived to be as it was. This “naive realism” was disrupted by a series of discoveries, such as the absence of anything resembling subjective colour in scientific descriptions, and a slew of reasons for recognising a subjective element in perception.
A philosopher’s stance on the fundamental nature of reality is called an ontology. The success of science in the twentieth and twenty-first centuries has led many philosophers to adopt a physicalist ontology, basically the idea that the fundamental constituents of reality are what physics says they are. (It is a background assumption of physicalism that the sciences form a sort of tower, with psychology and sociology near the top, biology and chemistry in the middle, and physics at the bottom.
The higher and intermediate layers don’t have their own ontologies—mind-stuff and elan vital are outdated concepts—everything is either a fundamental particle, or an arrangement of fundamental particles)
So the problem of mind is now the problem of qualia, and the way philosophers want to explain it is physicalistically. However, the problem of explaining how brains give rise to subjective sensation, of explaining qualia in physical terms, is now considered to be The Hard Problem. In the words of David Chalmers:
“It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.”
What is hard about the hard problem is the requirement to explain consciousness, particularly conscious experience, in terms of a physical ontology. It’s the combination of the two that makes it hard. Which is to say that the problem can be sidestepped by either denying consciousness, or adopting a non-physicalist ontology.
Examples of non-physical ontologies include dualism, panpsychism and idealism. These are not faced with the Hard Problem, as such, because they are able to say that subjective qualia just are what they are, without facing any need to offer a reductive explanation of them. But they have problems of their own, mainly that physicalism is so successful in other areas.
Eliminative materialism and illusionism, on the other hand, deny that there is anything to be explained, thereby implying there is no problem. But these approaches also remain unsatisfactory because of the compelling subjective evidence for consciousness.
Now, maybe Nagel doesn’t say all that, but he’s not the only occupant of camp #2.
Poor Mary, locked in her black-and-white room, doesn’t have a Simulation Centre. No matter how much information she is given about what wavelengths correspond to the colour blue, she will never have the visual experience of looking at something blue. Lucky Sue, Mary’s sister, was born with the Simulation Centre mutation. Even locked in a neighbouring black-and-white room, when she learns about the existence of materials that don’t reflect all wavelengths of light but only some wavelengths, Sue decides to model such a material in her Simulation Centre, and so is able to experience looking at the colour blue.
That doesn’t prove anything relevant, because Mary’s sister is not creating or using a reductive physical explanation. Maybe her visualisation abilities, and everybody else’s, use non-physical pixie dust. Nothing about her ability refutes that claim, because it’s an ability, not an explanation.
Physicalists sometimes respond to Mary’s Room by saying that one cannot expect Mary to actually instantiate Red herself just by looking at a brain scan. It seems obvious to them that a physical description of a brain state won’t convey what that state is like, because it doesn’t put you into that state. As an argument for physicalism, the strategy is to accept that qualia exist, but argue that they present no unexpected behaviour, or other difficulties for physicalism.
That is correct as stated but somewhat misleading: the problem is why it is necessary, in the case of experience, and only in the case of experience, to instantiate it in order to fully understand it. Obviously, it is true that a description of a brain state won’t put you into that brain state. But that doesn’t show that there is nothing unusual about qualia. The problem is that in no other case does it seem necessary to instantiate a brain state in order to understand something.
If another version of Mary were shut up to learn everything about, say, nuclear fusion, the question “would she actually know about nuclear fusion?” could only be answered “yes, of course… didn’t you just say she knows everything?” The idea that she would have to instantiate a fusion reaction within her own body in order to understand fusion is quite counterintuitive. Similarly, a description of photosynthesis will not make you photosynthesise, and photosynthesising would not be needed for a complete understanding of photosynthesis.
In other words: the Mary’s Room thought experiment seems to me (again, from a Camp 1 perspective) to illustrate that our brains lack the machinery to turn a conceptual understanding of a complex physical system into subjective experience.[1] This seems like a mundane fact about our brains
The fact that we have experience at all is mundane...yet it has no explanation. Mundane and mysterious just aren’t opposites. We experience gravity all the time, but it’s still hard to understand.
Yes, but the actual explanation is obviously possible. One access is different from another because one is between regions of the brain via neurons, and the other is between brain and brain scan via vision. What part do you think is impossible to specify?
The problem is that in no other case does it seem necessary to instantiate a brain state in order to understand something.
Riding a bicycle. And you need to instantiate a brain state to know anything—instantiating brain states is what it means for a brain to know something. The explanation for “why it seems to be unnecessary in other cases” is “people are bad at physics”.
Or you can use a sensible theory of knowledge where Mary understands everything about seeing red without seeing it, and the explanation for “why it seems that she doesn’t understand” is “people are bad at distinguishing between being and knowing”.
I mean, there is a physicalist explanation of everything about this scenario. You could have arguments on the level of “but people find it confusing for a couple of seconds!” against the physicality of anything from mirrors to levers.
And you need to instantiate a brain state to know anything
No, knowledge can be stored outside brains.
Mary understands everything about seeing red without seeing it, and the explanation for “why it seems that she doesn’t understand” is “people are bad at distinguishing between being and knowing”.
Or people insist by fiat that they are the same, when they are plainly different.
Yeah, I agree with both points. I edited the post to reflect it; for the whole brain vs parts thing I just added a sentence; for the kind of access thing I made it a footnote and also linked to your comment. As you said, it does seem like a refinement of the model rather than a contradiction, but it’s definitely important enough to bring up.
This is a clear and convincing account of the intuitions that lead to people either accepting or denying the existence of the Hard Problem. I’m squarely in Camp #1, and while I think the broad strokes are correct there are two places where I think this account gets Camp #1 a little wrong on the details.
I think this is conflating two difference senses of ‘claim’. The first sense is the interpersonal or speech sense: John makes a claim to you about his internal experience, in the form of speech. In this sense, ‘John claims to have a headache’ is the correct explanandum, in the Camp #1 view, of John telling you he has a headache, because it’s the closest thing to John’s actual experience that you have access to.
However, there is something different going on in the case where you yourself seem to have had an experience. You can believe you have had a certain experience without telling anybody about it, or without even uttering the words ‘I experienced X’ into an empty room, so the interpersonal or speech sense of ‘claim’ doesn’t really seem to apply. This only leaves us with the sense of ‘making a claim to yourself’, which might more precisely be called ‘thinking’ or ‘believing’.
Even in the Camp #1 view, there really is something different about a claim you make to yourself. You have privileged access to the contents of your own mind that you don’t have to contents of other people’s minds, by virtue of the mundane physical fact that the neurones in your brain are connected to the other neurones in your brain but not to the neurones in other people’s brains. Even if you don’t utter the words ‘I experienced X’, there is still something to be explained that lies between ‘actually experiencing X’ and ‘claiming in speech to have experienced X: why did you have the thought or belief ‘I experienced X’, instead of ‘I didn’t experience X but it would be useful for me to lie about it’? The explanandum in the case of your own experience is located a little deeper than it is in the case of the experiences of others. You can still be wrong about the underlying reality of your experiences – perhaps the memory of having a headache was falsely implanted with nefarious technology – but you have access to a type of evidence about it that John does not.
(I’ve never been able to figure out if Thomas Nagel, in ‘What is it like to be a bat?’, believes that the mere existence of this sort of privileged evidence about one’s own experiences tells us something about the nature of qualia/subjectivity. He says ‘The point of view in question is not one accessible only to a single individual. Rather it is a type.’ But, from my Camp #1 perspective, he never seems to explain what the difference is.)
Perhaps this overly nit-picky, but I don’t believe Camp #1 intuitions imply that consciousness is or arises from a particular ‘part’ of the brain, in the sense that you could say ‘it comes from the neurones in this region’ or ‘it comes from the subset of neurones lighting up on this fMRI’, even allowing fuzzy boundaries. There’s no reason to expect the physical substrate of the brain, or even the network topology of the its connections, to always map straightforwardly to some feature or property of the mind, and particularly not for more abstract and higher-level properties. Sometimes there is such an obvious mapping (e.g. visual pathways), but there’s no more reason to expect that there is a ‘consciousness part of the brain’ than a ‘reasoning part of the brain’ or an ‘optimising part of the brain’; it might just be a thing that the whole brain is or does. By analogy, you might be able to point to a particular bit of circuitry in a computer that processes raw data from a camera sensor, but you can’t point to any one part and say ‘this is where the operating system comes from’.
The upshot is the same: Camp #1 will view consciousness as an ‘inherently fuzzy phenomenon’. We might just find it to be even fuzzier than you suggest here.
But presumably everyone in camp 2 will agree that memories are not perfectly reliable and that memories of experiences are different from those experiences themselves. We could be misremembering. The actually interesting case is whether you can be wrong about having certain experiences now, such that no memory is involved.
Say, you are having a strong headache. Here the headache itself seems to be the evidence. Which seems to mean you can’t be mistaken about currently having a headache.
You’re absolutely right that this is the more interesting case. I intentionally chose the past tense to make it easier to focus on the details of the example rather than the Camp #1/Camp #2 distinction per se. For completeness, I’ll try to recapitulate my understanding of Rafael’s account for the present-tense case ‘I have a headache right now’.
From my Camp #1 perspective, any mechanistic description of the brain that explained why it generated the thought/belief/utterance ‘I have a headache right now’ instead of ‘I don’t have a headache right now’ in response to a given set of inputs would be a fully satisfying explanation. Perhaps it really is impossible for a human brain to generate the output ‘I have a headache right now’ without meeting some objective definition of a headache (some collection of facts about sensory inputs and brain state that distinguishes a headache from e.g. a stubbed toe), but there doesn’t seem to be any reason why this impossibility could not be a mundane fact conditional on the physical details of human brains. The brain is taking some combination of inputs, which might include external sensory data as well as introspective data about its own state, and generating a thought/belief/utterance output. It doesn’t seem impossible in principle that, by tweaking certain connections or using TMS or whatever, the mapping between these inputs and outputs could be altered such that the brain reliably generates the output ‘I don’t have a headache right now’ in situations where the chosen objective definition of ‘having a headache’ holds true. So, for Camp #1 the explanandum really is the output ‘I have a headache right now’. (The purpose of my comment was to expand the definition of ‘output’ to explicitly include thoughts and beliefs as well as utterances, and to acknowledge that the inputs in the case ‘I have a headache’ really are different to those in the case ‘John says he has a headache’.)
Camp #2 would say that it is impossible even in principle to be mistaken about the experience of having a headache. They might say it is impossible to meaningfully define ‘having a headache’ only in terms of sensory and/or introspective inputs to the brain. In their view, there is a sort of hard, irreducible kernel of experiencing-a-headache-subjective-qualia-stuff which is closely entangled with the objective inputs and outputs (they would agree that you are more likely to experience a headache if you were hit on the head with a hammer, and more likely to say ‘I have a headache’ if you were experiencing a headache), but nevertheless exists independent from and in addition to these objective facts and is not reducible to an account of only the inputs, outputs, and mapping between them. The explanandum, in their view, is the subjective-qualia-stuff. Camp #2 would fully admit that it’s really difficult to pin down the nature of the subjective-qualia-stuff; that’s why it’s a Hard Problem.
I’ve done my best here to represent Camp #2 accurately, but it’s difficult because their perspective is very alien to me. Apologies in advance to any Camp #2 people and happy to hear your corrections.
Okay, so you are saying that in the first-person case, the evidence for having a headache is not itself the experience of having a headache, but the belief that you have the experience of having a headache. So according to you, one could be wrong about currently having a headache, namely when the aforementioned belief is false, when you have the belief but not the experience. Is this right?
If so, I see two problems with this.
Intuitively it doesn’t seem possible to be wrong about one’s own current mental states. Imagine a patient complains to a doctor about having a terrible headache. The doctor replies: “You may be sure you are having a terrible headache, but maybe you are wrong and actually don’t have a headache at all.” Or a psychiatrist: “I’m sure you aren’t lying, but you may yourself be mistaken about being depressed right now, maybe you are actually perfectly happy”. These cases seem absurd. I don’t remember any case where I considered myself being wrong about a current mental state. We don’t say: I just thought I was feeling pain, but actually I didn’t.
A belief seems to be itself a mental state. So even if you add the belief as an intermediary layer of evidence between the agent and their experience, then you still have something which the agent is infallible about: Their belief. The evidence for having a belief would be the belief itself. Beliefs seem to be different from utterances, in that the latter are mechanistically describable third person events (sound waves), while beliefs seem to be just as mental as experiences. So the explanandum, the evidence, would in both cases be something mental. But it seems you require the explanandum to be something “objective”, like an utterance.
Not quite. I would say that in the first-person case, the explanandum – the thing that needs to be explained – is the belief (or thought, or utterance) that you have the experience of having a headache. Once you have explained how some particular set of inputs to the brain led to that particular output, you have explained everything that is going on, in the Camp #1 view. Quoting the original post, in the Camp #1 view ‘if we can explain exactly why you, as a physical system, uttered the words “I experienced X”, then there’s nothing else to explain.’
I would actually agree that ‘you can’t be mistaken about your own current experiences’, but I think the problem Rafael’s post points out is that Camp #1 and Camp #2 would understand that to mean different things.
I’m a bit confused about what you mean by ‘mental states’. It’s certainly possible to be wrong about one’s own current mental state, as I understand the term; people experiencing psychosis usually firmly believe they are not psychotic. I don’t think the two Camps would disagree on this.
The three examples you mention, of having a headache, being depressed (by which I assume you mean feeling down rather than the psychiatric condition specifically), and feeling pain, all seem like examples of subjective experiences. Insofar as this paragraph is saying ‘it’s not possible to be wrong about your own subjective experience’, I would agree, with the caveat as above that what I think this means might be different to what a Camp #2 person thinks this means.
I don’t require the explanandum to be an utterance, and I don’t think there’s any important sense in which an utterance is more objective than a thought or belief. My original comment was intended only to point out that in the first-person case you have privileged access to certain data, namely the contents of your own mind, that you don’t have in the third-person case. The reasons for this are completely mundane and conditional on the current state of affairs, namely that we currently have no practical way of accessing the semantic content inside each other’s skulls other than via speech. It’s possible to imagine technology that might change this state of affairs, like a highly accurate thought-reading device for example.
I do think the explanandum is required to be an output, because being able to explain or predict the output is the test of your model of what is going on. If you predict ‘this person is going to say they don’t have a headache’, and the person says ‘I have a headache’, then there’s something wrong with your model.
I think this is the crucial point of contention. I find the following obvious: thoughts or beliefs are on the same subjective level as experiences, which is quite different from utterances, which are purely mechanical third-person events, similar to the movement of a limb. In your view however, if I’m not misunderstanding you, beliefs are more similar to utterances than to experiences. So while I think beliefs are just as hard to explain as experiences, in your view beliefs are about as easy to explain as utterances. Is this a fair characterization?
The reason I think utterances are “easy” to explain is that they are physical events and therefore obviously allow for a mechanistic third-person explanation. The explanation would not in principle be different from explaining a simple spinal reflex. Nerve inputs somehow cause nerve outputs, except that for an utterance there are orders of magnitude more neurons involved, which makes the explanation much harder in practice. But the principle is the same.
For subjective attitudes like beliefs and experiences the explanandum is not just a mouth movement (as in the case of utterances) which would be directly caused by nervous signals. It is unclear how to even grasp subjective beliefs and experiences in a mechanical language of cause and effect. As an illustration, it is not obvious why an organism couldn’t theoretically be a p-zombie—have the usual neuronal configuration, behave completely normally, make all the same utterances—without having any subjective beliefs or experiences.
(It seems vaguely plausible to me that for beliefs and experiences, a reductive, rather than causal, explanation would be needed. Yet the model of other reductive explanations in science, like explaining the temperature of a gas with the average kinetic energy of the particles it is made of, doesn’t obviously fit what would be needed in the case of mental states. But this is a longer story.)
Huh, this is interesting. I wouldn’t have suspected this to be the crux. I’m not sure how well this maps to the Camp 1 vs 2 difference as opposed to idiosyncratic differences in our own views.
This is a fair characterisation, though I don’t think ease of explanation is a crucial point. I would certainly say that beliefs are more similar to utterances than to experiences. To illustrate this, sitting here now on the surface of Earth I think it’s possible for me to produce an utterance that is about conditions at the centre of Jupiter, and I think it’s possible for me to have a belief or a thought that is about conditions at the centre of Jupiter, and all of these could stand in a truth relation to what conditions are actually like at the centre of Jupiter. I don’t think I can have an experience that is about conditions at the centre of Jupiter. Strictly, I don’t think I can have an experience that is ‘about’ anything. I don’t think experiences are models of the world, in the way that utterances, beliefs, and thoughts can be. This is why I would agree that it is not possible to be mistaken about an experience, though in everyday language we often elide experiences with claims about the world that do have truth values (‘it looks red’ almost always means ‘I believe it is actually red’, not ‘when I look at it I experience seeing red but maybe that’s just a hallucination’).
What do you see as the important difference between ‘subjective’ and ‘objective’? Is subjectivity about who has access to a phenomenon, or is it a quality of the phenomenon itself?
I agree with this.
If for the sake of argument we strike out ‘beliefs’ here and make it just about experiences, this seems to be a restatement of the Camp 1 vs 2 distinction. As a Camp 1 person, a mechanical explanation of whatever chain of events leads me to think or say that I have a headache would fully dissolve the question. I wouldn’t feel that there is anything left to explain. From what I understand of Camp 2, even given such an explanation they would still feel there is something left to explain, namely how these objective facts come together to produce subjective experience.
Mental states do not need to be “about” something, but it is pretty clear they can be. One can be just happy, but it seems one can also be happy about something. One certainly can wish for something, or fear that something is the case, or hope for it, etc. The form in the following is the same: the belief that x, the desire that x, the fear that x, the hope that x. Here x is a proposition. In case of e.g. loving x or hating x, x is an object, not a proposition, but again the mental state is about something. These states all seem hard to explain in a way that utterances aren’t.
The relevant difference here is the access. The “subjective” is exactly that which an agent is directly acquainted with, while the “objective” stuff is only inferred indirectly. It is unclear how one could explain one with the other.
As I said, it is unclear what such a mechanical explanation of a thought or belief would look like. It is clear that utterances are caused by mouth movements which are caused by neurons firing, but it is not clear how neurons could “cause” a belief, or how to otherwise (e.g. reductively) explain a belief. It is not clear how to distinguish p-zombies from normal people, or explain why they wouldn’t be possible.
I’m still a bit confused by what you mean by ‘mental states’. My best guess is that you are using it as a catch-all term for everything that is or might be going on in the brain, which includes experiences, beliefs, thoughts, and more general states like ‘psychotic’ or ‘relaxed’.
I agree that mental states do not need to be about something, but I think beliefs do need to be about something and thoughts can be about something (propositional in the way you describe). I don’t think an experience can be propositional. I don’t understand how this relates to whether these particular mental states are able to be explained.
My best account for what is going on here is that we have two interacting intuitive disagreements:
The ‘ordinary’ Camp 1 vs 2 disagreement, as outlined in Rafael’s post, where we disagree about where the explanandum lies in the case of subjective experience.
A disagreement over whether whatever special properties subjective experience has also extend to other mental phenomena like beliefs, such that in the Camp 2 view there would be a Hard Problem of why and how we have beliefs analogous to or identical with the Hard Problem of why and how we have subjective experience.
Does this account seem accurate to you?
I would not count “psychotic” here, since one is not necessarily directly acquainted with it (one doesn’t necessarily know one has it).
I thought you saw the fact that beliefs are about something as evidence that they are easier to explain than experiences, or that they are at least more similar to utterances than to experiences. I responded that aboutness (technical term: intentionality) doesn’t matter, as several things that are commonly regarded as qualia, just like experiences, can be about something, e.g. loves or fears. So accepting beliefs as explanandum would not be in principle different from accepting fears or experiences as explanandum, which would seem to put you more in Camp #2 rather than #1.
I think the main disagreement is actually just one, the above: What counts as a simple explanandum such that we would not run into hard explanatory problems? My position is that only utterances act as such a simple explanandum, and that no subjective mental state (things we are directly acquainted with, like intentional states, emotions and experiences) is simple in this sense, since they are not obviously compatible with any causal explanation.
Would it be fair to say then that by ‘mental states’ you mean ‘everything that the brain does that the brain can itself be aware of’?
I don’t think there is any connection between whether a thought/belief/experience is about something and whether it is explainable. I’m not sure about ‘easier to explain’, but it doesn’t seem like the degree of easiness is a key issue here. I hold the vanilla Camp 1 view that everything the brain is doing is ultimately and completely explainable in physical terms.
I do think beliefs are more similar to utterances than experiences. If we were to draw an ontology of ‘things brains do’, utterances would probably be a closer sibling to thoughts than to beliefs, and perhaps a distant cousin to experiences. A thought can be propositional (‘the sky is blue’) or non-propositional (‘oh no!’), as can an utterance, but a belief is only propositional, while an experience is never propositional. I think an utterance could be reasonably characterised as a thought that is not content to stay swimming around in the brain but for whatever reason escapes out through the mouth. To be clear though, I don’t think any of this maps on to the question of whether these phenomena are explicable in terms of the physical implementation details of the brain.
I think there is an in-principle difference between Camp 1 ‘accepting beliefs [or utterances] as explanandum’ and Camp 2 ‘accepting experiences as explanandum’. When you ask ‘What counts as a simple explanandum such that we would not run into hard explanatory problems?’, I think the disagreement between Camp 1 and Camp 2 in answering this question is not over ‘where the explanandum is’ so much as ‘what it would mean to explain it’.
It might help here to unpack the phrase ‘accepting beliefs as explanandum’ from the Camp 1 viewpoint. In a way this is a shorthand for ‘requiring a complete explanation of how the brain as a physical system goes from some starting state to the state of having the belief’. The belief or utterance as explanandum works as a shorthand for this for the reasons I mentioned above, i.e. that any explanation that does not account for how the brain ended up having this belief or generating this utterance is not a complete and satisfactory explanation. This doesn’t privilege either beliefs or utterances as special categories of things to be explained; they just happen to be end states that capture everything we think is worth explaining about something like ‘having a headache’ in particular circumstances like ‘forming a belief that I have a headache’ or ‘uttering the sentence “I have a headache”’.
By analogy, suppose that I was an air safety investigator investigating an incident in which the rudder of a passenger jet went into a sudden hardover. The most appropriate explanandum in this case is ‘the rudder going into a sudden hardover’, because any explanation that doesn’t end with ‘...and this causes the rudder to go into a sudden hardover’ is clearly unsatisfactory for my purposes. Suppose I then conduct a test flight in which the aircraft’s autopilot is disconnected from the rudder, and discover a set of conditions that reliably causes the autopilot to form an incorrect model of the state of the aircraft, such that if the autopilot was connected to the rudder it would command a sudden hardover to ‘correct’ the situation. It seems quite reasonable in this case for the explanandum to be ‘the autopilot forming an incorrect model of the state of the aircraft’. There is no conceptual difference in the type of explanation required in the two cases. They can both in principle be explained in terms of a physical chain of events, which in both cases would almost certainly include some sequence of computations inside the autopilot. The fact that the explanandum in the second case is a propositional representation internal to the autopilot rather than a physical movement of a rudder doesn’t pose any new conceptual mysteries. We’re just using the explanandum to define the scope of what we’re interested in explaining.
This is distinct from the Camp 2 view, in which even if you had a complete description of the physical steps involved in forming the belief or utterance ‘I have a headache’, there would still be something left to explain, that is the subjective character of the experience of having a headache. When the Camp 2 view says that the experience itself is the explanandum, it does privilege subjective experience as a special category of things to be explained. This view asserts that experience has a property of subjectiveness that in our current understanding cannot be explained in terms of the physical steps, and it is this property of subjectiveness itself that demands a satisfactory explanation. When Camp 2 point to experience as explanandum, they’re not saying ‘it would be useful and satisfying to have an explanation of the physical sequence of events that lead up to this state’; they’re saying ‘there is something going on here that we don’t even know how to explain in terms of a physical sequence of events’. Quoting the original post, in this view ‘even if consciousness is compatible with the laws of physics, it still poses a conceptual mystery relative to our current understanding.’
Yeah, aware of, or conscious of. Psychosis seems to be less a mental state in this sense than a disposition to produce certain mental states.
What you call “model” here would presumably correspond only to the externally observable neural correlate of a belief, not to the belief. The case would be the same for a neural correlate of an experience, so this doesn’t provide a difference between the two. Explaining the neural correlate is of course just as “easy” as explaining an utterance. The hard problem is to explain actual mental states with their correlates. So the case doesn’t explain the belief/experience in question in terms of this correlate. It would be unable to distinguish between a p-zombie, which does have the neural correlate but not the belief/experience, and a normal person. So it seems that either the explanation is unsuccessful as given, since it stops at the neural correlate and doesn’t explain the belief or experience, or you assume the explanandum is actually just the neural correlate, not the belief.
Apologies for the repetition, but I’m going to start by restating a slightly updated model of what I think is going on, because it provides the context for the rest of my comment. Basically I still think there are two elements to our disagreement:
The Camp 1 vs Camp 2 disagreement. Camp 1 thinks that a description of the physical system would completely and satisfactorily explain the nature of consciousness and subjective experience; Camp 2 thinks that there is a conceptual element of subjective experience that we don’t currently know how to explain in physical terms, even in principle. Camp 2 thinks there is a capital-H Hard Problem of consciousness, the ‘conceptual mystery’ in Rafael’s post; Camp 1 does not. I am in Camp 1, and as best I can tell you are in Camp 2.
You think that all(?) ‘mental states’ pose this conceptual Hard Problem, including intentional phenomena like thoughts and beliefs as well as more ‘purely subjective’ phenomena like experiences. My impression is that this is a mildly unorthodox position within Camp 2, although as I mentioned in my original comment I’ve never really understood e.g. what Nagel was trying to say about the relationship between mental phenomena being only directly accessible to a single mind and them being Hard to explain, so I might be entirely wrong about this. In any case, because I don’t believe that there is a conceptual mystery in the first place, the question of (e.g.) whether the explanandum is an utterance vs a belief means something very different to me than it does to you. When I talk about locating the explanandum at utterances vs beliefs, I’m talking about the scope of the physical system to be explained. When you talk about it, you’re talking about the location(s) of the conceptual mystery.
As a Camp 1 person, I don’t think that there is any (non-semantic) difference between the observable neurological correlates of a belief or any other mental phenomenon and the phenomenon itself. Once we have a complete physical description of the system, we Camp 1-ites might bicker over exactly which bits of it correspond to ‘experience’ and ‘consciousness’, or perhaps claim that we have reductively dissolved such questions entirely; but we would agree that these are just arguments over definitions rather than pointing to anything actually left unexplained. I don’t think there is a Hard Problem.
I take Dennett’s view on p-zombies, i.e. they are not conceivable.
In the Camp 1 view, once you’ve explained the neural correlates, there is nothing left to explain; whether or not you have ‘explained the belief’ becomes an argument over definitions.
Of course it’s possible, at least in principle: the doctor could have connected all your neurons that detect the headache and generate thoughts about it to another person’s neurons that generate the headache. Then you would be sure that you were having a headache, but actually it would be another person who was having it.
You can definitely be mistaken regarding what the headache means. When the headache is extreme you may feel as if you are dying. Yet, despite feeling this way, you may not actually die.
Likewise you may feel as if your feelings are immaterial even though they are not. As soon as the question isn’t just about your immediate experience but also about how this experience is related to the world—you may very well be wrong.
You don’t just have a level of access, you have a type of access. Your access to your own mind isn’t like looking at a brain scan.
The Mary’s Room thought experiment brings it out. Mary has complete access to someone else’s mental state, from the outside, but still doesn’t experience it from the inside.
From my Camp 1 perspective, this just seems like a restatement of what I wrote. My direct access to my own mind isn’t like my indirect access to other people’s minds; to understand another person’s mind, I can at best gather scraps of sensory data like ‘what that person is saying’ and try to piece them together into a model. My direct access to my own mind isn’t like looking at a brain scan of my own mind; to understand a brain scan, I need to gather sensory data like ‘what the monitor attached to the brain scanner shows’ and try to piece them into a model. This seems to be completely explained by the fact that my brain can only gather data about the external world through a handful of imperfect sensory channels, while it can gather data about its own internal processes through direct introspection. To make things worse, my brain is woefully underpowered for the task of modelling complex things like brains, so it’s almost inevitable that any model I construct will be imperfect. Even a scan of my own brain would give me far less insight into my mind than direct introspection, because brains are hideously complicated and I’m not well-equipped to model them.
Whether you call that a ‘level’ or ‘type’ of access, I’m still no closer to understanding how Nagel relates the (to me mundane) fact that these types of access exist to the ‘conceptual mystery’ of qualia or consciousness.
Imagine a one-in-a-million genetic mutation that causes a human brain to develop a Simulation Centre. The Simulation Centre might be thought of as a massively overdeveloped form of whatever circuitry gives people mental imagery. It is able to simulate real-world physics with the fidelity of state-of-the-art computer physics simulations, video game 3D engines, etc. The Simulation Centre has direct neural connections to the brain’s visual pathways that, under voluntary control, can override the sensory stream from the eyes. So, while a person with strong mental imagery might be able to fuzzily visualise something like a red square, a person with the Simulation Centre mutation could examine sufficiently detailed blueprints for a building and have a vivid photorealistic visual experience of looking at it, indistinguishable from reality.
Poor Mary, locked in her black-and-white room, doesn’t have a Simulation Centre. No matter how much information she is given about what wavelengths correspond to the colour blue, she will never have the visual experience of looking at something blue. Lucky Sue, Mary’s sister, was born with the Simulation Centre mutation. Even locked in a neighbouring black-and-white room, when she learns about the existence of materials that don’t reflect all wavelengths of light but only some wavelengths, Sue decides to model such a material in her Simulation Centre, and so is able to experience looking at the colour blue.
In other words: the Mary’s Room thought experiment seems to me (again, from a Camp 1 perspective) to illustrate that our brains lack the machinery to turn a conceptual understanding of a complex physical system into subjective experience.[1] This seems like a mundane fact about our brains (‘we don’t have Simulation Centres’) rather than pointing to any fundamental conceptual mystery.
This might just be a matter of degree. Some people apparently can do things like visualise a red square, and it seems reasonable that a person who had seen shapes of almost every colour before but had never happened to see a red square could nevertheless visualise one if given the concept.
At this point, I can prove to you that you are actually in Camp #2. All I have to do is point out that the kind of access you have to your mind is (or rather includes) qualia!
The mystery relates entirely to the expectation that there should be a reductive physical explanation of qualia.
The Hard Problem of Qualia
Whilst science has helped with some aspects of the mind-body problem, it has made others more difficult, or at least exposed their difficulty. In pre-scientific times, people were happy to believe that the colour of an object was an intrinsic property of it, which was perceived to be as it was. This “naive realism” was disrupted by a series of discoveries, such as the absence of anything resembling subjective colour in scientific descriptions, and a slew of reasons for recognising a subjective element in perception.
A philosopher’s stance on the fundamental nature of reality is called an ontology. The success of science in the twentieth and twenty-first centuries has led many philosophers to adopt a physicalist ontology, basically the idea that the fundamental constituents of reality are what physics says they are. (It is a background assumption of physicalism that the sciences form a sort of tower, with psychology and sociology near the top, biology and chemistry in the middle, and physics at the bottom. The higher and intermediate layers don’t have their own ontologies—mind-stuff and elan vital are outdated concepts—everything is either a fundamental particle, or an arrangement of fundamental particles.)
So the problem of mind is now the problem of qualia, and the way philosophers want to explain it is physicalistically. However, the problem of explaining how brains give rise to subjective sensation, of explaining qualia in physical terms, is now considered to be The Hard Problem. In the words of David Chalmers:
“It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.”
What is hard about the hard problem is the requirement to explain consciousness, particularly conscious experience, in terms of a physical ontology. It’s the combination of the two that makes it hard. Which is to say that the problem can be sidestepped by either denying consciousness, or adopting a non-physicalist ontology.
Examples of non-physical ontologies include dualism, panpsychism and idealism. These are not faced with the Hard Problem, as such, because they are able to say that subjective qualia just are what they are, without facing any need to offer a reductive explanation of them. But they have problems of their own, mainly that physicalism is so successful in other areas.
Eliminative materialism and illusionism, on the other hand, deny that there is anything to be explained, thereby implying there is no problem. But these approaches also remain unsatisfactory because of the compelling subjective evidence for consciousness.
Now, maybe Nagel doesn’t say all that, but he’s not the only occupant of camp #2.
That doesn’t prove anything relevant, because Mary’s sister is not creating or using a reductive physical explanation. Maybe her visualisation abilities, and everybody else’s, use non-physical pixie dust. Nothing about her ability refutes that claim, because it’s an ability, not an explanation.
Physicalists sometimes respond to Mary’s Room by saying that one cannot expect Mary to actually instantiate red herself just by looking at a brain scan. It seems obvious to them that a physical description of a brain state won’t convey what that state is like, because it doesn’t put you into that state. As an argument for physicalism, the strategy is to accept that qualia exist, but argue that they present no unexpected behaviour, or other difficulties for physicalism.
That is correct as stated but somewhat misleading: the problem is why it is necessary, in the case of experience, and only in the case of experience, to instantiate it in order to fully understand it. Obviously, it is true that a description of a brain state won’t put you into that brain state. But that doesn’t show that there is nothing unusual about qualia. The problem is that in no other case does it seem necessary to instantiate a brain state in order to understand something.
If another version of Mary were shut up to learn everything about, say, nuclear fusion, the question “would she actually know about nuclear fusion” could only be answered “yes, of course... didn’t you just say she knows everything”? The idea that she would have to instantiate a fusion reaction within her own body in order to understand fusion is quite counterintuitive. Similarly, a description of photosynthesis will not make you photosynthesise, and photosynthesising would not be needed for a complete understanding of photosynthesis.
The fact that we have experience at all is mundane...yet it has no explanation. Mundane and mysterious just aren’t opposites. We experience gravity all the time, but it’s still hard to understand.
And because there is a physicalist explanation for the difference of access, there is a physicalist explanation for qualia and the problem is solved.
It is not an explanation to predict that one thing is different from another in an unspecified way.
Yes, but the actual explanation is obviously possible. One access is different from another because one is between regions of the brain via neurons, and the other is between brain and brain scan via vision. What part do you think is impossible to specify?
The qualia. How does a theory describe a subjective sensation?
Riding a bicycle. And you need to instantiate a brain state to know anything—instantiating brain states is what it means for a brain to know something. The explanation for “why it seems to be unnecessary in other cases” is “people are bad at physics”.
Or you can use a sensible theory of knowledge where Mary understands everything about seeing red without seeing it, and the explanation for “why it seems that she doesn’t understand” is “people are bad at distinguishing between being and knowing”.
I mean, there is a physicalist explanation of everything about this scenario. You could make arguments on the level of “but people find it confusing for a couple of seconds!” against the physicality of anything from mirrors to levers.
No, knowledge can be stored outside brains.
Or people insist by fiat that they are the same, when they are plainly different.
Yeah, I agree with both points. I edited the post to reflect it; for the whole brain vs parts thing I just added a sentence; for the kind of access thing I made it a footnote and also linked to your comment. As you said, it does seem like a refinement of the model rather than a contradiction, but it’s definitely important enough to bring up.