I don’t require the explanandum to be an utterance, and I don’t think there’s any important sense in which an utterance is more objective than a thought or belief.
I think this is the crucial point of contention. I find the following obvious: thoughts or beliefs are on the same subjective level as experiences, which is quite different from utterances, which are purely mechanical third-person events, similar to the movement of a limb. In your view, however, if I’m not misunderstanding you, beliefs are more similar to utterances than to experiences. So while I think beliefs are as hard to explain as experiences, in your view beliefs are about as easy to explain as utterances. Is this a fair characterization?
The reason I think utterances are “easy” to explain is that they are physical events and therefore obviously allow for a mechanistic third-person explanation. The explanation would not in principle be different from explaining a simple spinal reflex. Nerve inputs somehow cause nerve outputs, except that for an utterance there are orders of magnitude more neurons involved, which makes the explanation much harder in practice. But the principle is the same.
For subjective attitudes like beliefs and experiences, the explanandum is not just a mouth movement (as in the case of utterances) that is directly caused by nervous signals. It is unclear how to even grasp subjective beliefs and experiences in a mechanical language of cause and effect. As an illustration, it is not obvious why an organism couldn’t theoretically be a p-zombie—have the usual neuronal configuration, behave completely normally, produce all the same utterances—without having any subjective beliefs or experiences.
(It seems vaguely plausible to me that for beliefs and experiences, a reductive, rather than causal, explanation would be needed. Yet the model of other reductive explanations in science, like explaining the temperature of a gas as the average kinetic energy of the particles it is made of, doesn’t obviously fit what would be needed in the case of mental states. But this is a longer story.)
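(For concreteness, the reductive identity in that textbook example is, for an ideal gas, $\langle E_{\text{kin}} \rangle = \tfrac{3}{2} k_B T$: the average translational kinetic energy per particle just is a fixed multiple of the temperature, where $k_B$ is Boltzmann’s constant.)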
Huh, this is interesting. I wouldn’t have suspected this to be the crux. I’m not sure how well this maps to the Camp 1 vs 2 difference as opposed to idiosyncratic differences in our own views.
In your view, however, if I’m not misunderstanding you, beliefs are more similar to utterances than to experiences. So while I think beliefs are as hard to explain as experiences, in your view beliefs are about as easy to explain as utterances. Is this a fair characterization?
This is a fair characterisation, though I don’t think ease of explanation is a crucial point. I would certainly say that beliefs are more similar to utterances than to experiences. To illustrate this, sitting here now on the surface of Earth I think it’s possible for me to produce an utterance that is about conditions at the centre of Jupiter, and I think it’s possible for me to have a belief or a thought that is about conditions at the centre of Jupiter, and all of these could stand in a truth relation to what conditions are actually like at the centre of Jupiter. I don’t think I can have an experience that is about conditions at the centre of Jupiter. Strictly, I don’t think I can have an experience that is ‘about’ anything. I don’t think experiences are models of the world, in the way that utterances, beliefs, and thoughts can be. This is why I would agree that it is not possible to be mistaken about an experience, though in everyday language we often elide experiences with claims about the world that do have truth values (‘it looks red’ almost always means ‘I believe it is actually red’, not ‘when I look at it I experience seeing red but maybe that’s just a hallucination’).
I find the following obvious: thoughts or beliefs are on the same subjective level as experiences,
What do you see as the important difference between ‘subjective’ and ‘objective’? Is subjectivity about who has access to a phenomenon, or is it a quality of the phenomenon itself?
The reason I think utterances are “easy” to explain is that they are physical events and therefore obviously allow for a mechanistic third-person explanation. The explanation would not in principle be different from explaining a simple spinal reflex. Nerve inputs somehow cause nerve outputs, except that for an utterance there are orders of magnitude more neurons involved, which makes the explanation much harder in practice. But the principle is the same.
I agree with this.
It is unclear how to even grasp subjective beliefs and experiences in a mechanical language of cause and effect.
If for the sake of argument we strike out ‘beliefs’ here and make it just about experiences, this seems to be a restatement of the Camp 1 vs 2 distinction. As a Camp 1 person, a mechanical explanation of whatever chain of events leads me to think or say that I have a headache would fully dissolve the question. I wouldn’t feel that there is anything left to explain. From what I understand of Camp 2, even given such an explanation they would still feel there is something left to explain, namely how these objective facts come together to produce subjective experience.
Mental states do not need to be “about” something, but it is pretty clear they can be. One can be just happy, but it seems one can also be happy about something. One can certainly wish for something, or fear that something is the case, or hope for it, etc. The form in the following is the same: the belief that x, the desire that x, the fear that x, the hope that x. Here x is a proposition. In the case of, e.g., loving x or hating x, x is an object rather than a proposition, but again the mental state is about something. These states all seem hard to explain in a way that utterances aren’t.
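(A toy sketch of that shared form, with hypothetical Python types that are purely illustrative, not a theory of intentionality: the attitude varies while the content slot stays the same.)

```python
from dataclasses import dataclass
from typing import Any

# Hypothetical, purely illustrative types: propositional attitudes
# share the form attitude(x), where x is a proposition.

@dataclass
class Proposition:
    content: str      # e.g. "it will rain tomorrow"

@dataclass
class Belief:
    x: Proposition    # the belief that x

@dataclass
class Desire:
    x: Proposition    # the desire that x

@dataclass
class Hope:
    x: Proposition    # the hope that x

# Object-directed attitudes take an object rather than a proposition,
# but are still about something.
@dataclass
class Love:
    x: Any            # e.g. a person

rain = Proposition("it will rain tomorrow")
attitudes = [Belief(rain), Desire(rain), Hope(rain)]  # same x, different attitude
```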
What do you see as the important difference between ‘subjective’ and ‘objective’? Is subjectivity about who has access to a phenomenon, or is it a quality of the phenomenon itself?
The relevant difference here is the access. The “subjective” is exactly that which an agent is directly acquainted with, while the “objective” stuff is only inferred indirectly. It is unclear how one could explain the one in terms of the other.
As a Camp 1 person, a mechanical explanation of whatever chain of events leads me to think or say that I have a headache would fully dissolve the question. I wouldn’t feel that there is anything left to explain.
As I said, it is unclear what such a mechanical explanation of a thought or belief would look like. It is clear that utterances are caused by mouth movements which are caused by neurons firing, but it is not clear how neurons could “cause” a belief, or how to otherwise (e.g. reductively) explain a belief. It is not clear how to distinguish p-zombies from normal people, or explain why they wouldn’t be possible.
Mental states do not need to be “about” something, but it is pretty clear they can be.
I’m still a bit confused by what you mean by ‘mental states’. My best guess is that you are using it as a catch-all term for everything that is or might be going on in the brain, which includes experiences, beliefs, thoughts, and more general states like ‘psychotic’ or ‘relaxed’.
I agree that mental states do not need to be about something, but I think beliefs do need to be about something and thoughts can be about something (propositional in the way you describe). I don’t think an experience can be propositional. I don’t understand how this relates to whether these particular mental states are able to be explained.
It is clear that utterances are caused by mouth movements which are caused by neurons firing, but it is not clear how neurons could “cause” a belief, or how to otherwise (e.g. reductively) explain a belief.
My best account for what is going on here is that we have two interacting intuitive disagreements:
The ‘ordinary’ Camp 1 vs 2 disagreement, as outlined in Rafael’s post, where we disagree about where the explanandum lies in the case of subjective experience.
A disagreement over whether whatever special properties subjective experience has also extend to other mental phenomena like beliefs, such that in the Camp 2 view there would be a Hard Problem of why and how we have beliefs analogous to or identical with the Hard Problem of why and how we have subjective experience.
Does this account seem accurate to you?
I’m still a bit confused by what you mean by ‘mental states’. My best guess is that you are using it as a catch-all term for everything that is or might be going on in the brain, which includes experiences, beliefs, thoughts, and more general states like ‘psychotic’ or ‘relaxed’.
I would not count “psychotic” here, since one is not necessarily directly acquainted with it (one doesn’t necessarily know one has it).
I don’t think an experience can be propositional. I don’t understand how this relates to whether these particular mental states are able to be explained.
I thought you saw the fact that beliefs are about something as evidence that they are easier to explain than experiences, or that they are at least more similar to utterances than to experiences. I responded that aboutness (technical term: intentionality) doesn’t matter, since several things that are commonly regarded as qualia, just like experiences, can be about something, e.g. loves or fears. So accepting beliefs as explanandum would not be in principle different from accepting fears or experiences as explanandum, which would seem to put you in Camp #2 rather than Camp #1.
I think there is actually just one main disagreement, the one above: What counts as a simple explanandum such that we would not run into hard explanatory problems? My position is that only utterances act as such a simple explanandum, and that no subjective mental state (things we are directly acquainted with, like intentional states, emotions and experiences) is simple in this sense, since they are not obviously compatible with any causal explanation.
I would not count “psychotic” here, since one is not necessarily directly acquainted with it (one doesn’t necessarily know one has it).
Would it be fair to say then that by ‘mental states’ you mean ‘everything that the brain does that the brain can itself be aware of’?
I thought you saw the fact that beliefs are about something as evidence that they are easier to explain than experiences
I don’t think there is any connection between whether a thought/belief/experience is about something and whether it is explainable. I’m not sure about ‘easier to explain’, but it doesn’t seem like the degree of easiness is a key issue here. I hold the vanilla Camp 1 view that everything the brain is doing is ultimately and completely explainable in physical terms.
or that they are at least more similar to utterances than to experiences
I do think beliefs are more similar to utterances than to experiences. If we were to draw an ontology of ‘things brains do’, utterances would probably be a closer sibling to thoughts than to beliefs, and perhaps a distant cousin to experiences. A thought can be propositional (‘the sky is blue’) or non-propositional (‘oh no!’), as can an utterance, but a belief is only propositional, while an experience is never propositional. I think an utterance could be reasonably characterised as a thought that is not content to stay swimming around in the brain but for whatever reason escapes out through the mouth. To be clear, though, I don’t think any of this maps onto the question of whether these phenomena are explicable in terms of the physical implementation details of the brain.
So accepting beliefs as explanandum would not be in principle different from accepting fears or experiences as explanandum, which would seem to put you in Camp #2 rather than Camp #1.
I think there is an in-principle difference between Camp 1 ‘accepting beliefs [or utterances] as explanandum’ and Camp 2 ‘accepting experiences as explanandum’. When you ask ‘What counts as a simple explanandum such that we would not run into hard explanatory problems?’, I think the disagreement between Camp 1 and Camp 2 in answering this question is not over ‘where the explanandum is’ so much as ‘what it would mean to explain it’.
It might help here to unpack the phrase ‘accepting beliefs as explanandum’ from the Camp 1 viewpoint. In a way this is a shorthand for ‘requiring a complete explanation of how the brain as a physical system goes from some starting state to the state of having the belief’. The belief or utterance as explanandum works as a shorthand for this for the reasons I mentioned above, i.e. that any explanation that does not account for how the brain ended up having this belief or generating this utterance is not a complete and satisfactory explanation. This doesn’t privilege either beliefs or utterances as special categories of things to be explained; they just happen to be end states that capture everything we think is worth explaining about something like ‘having a headache’ in particular circumstances like ‘forming a belief that I have a headache’ or ‘uttering the sentence “I have a headache”’.
By analogy, suppose that I were an air safety investigator looking into an incident in which the rudder of a passenger jet went into a sudden hardover. The most appropriate explanandum in this case is ‘the rudder going into a sudden hardover’, because any explanation that doesn’t end with ‘...and this causes the rudder to go into a sudden hardover’ is clearly unsatisfactory for my purposes. Suppose I then conduct a test flight in which the aircraft’s autopilot is disconnected from the rudder, and discover a set of conditions that reliably causes the autopilot to form an incorrect model of the state of the aircraft, such that if the autopilot was connected to the rudder it would command a sudden hardover to ‘correct’ the situation. It seems quite reasonable in this case for the explanandum to be ‘the autopilot forming an incorrect model of the state of the aircraft’. There is no conceptual difference in the type of explanation required in the two cases. Both can in principle be explained in terms of a physical chain of events, which in both cases would almost certainly include some sequence of computations inside the autopilot. The fact that the explanandum in the second case is a propositional representation internal to the autopilot rather than a physical movement of a rudder doesn’t pose any new conceptual mysteries. We’re just using the explanandum to define the scope of what we’re interested in explaining.
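(To make the analogy concrete, here is a minimal toy sketch in Python, with entirely hypothetical names and fault logic, not a real avionics system. The point is just that the ‘incorrect model’ is an ordinary intermediate computational state on the same causal chain that ends at the rudder.)

```python
# Toy sketch of the autopilot analogy (hypothetical names and fault logic,
# not a real avionics system). The autopilot's 'incorrect model' is an
# ordinary intermediate computational state on the causal chain to the rudder.

def estimate_yaw(sensor_yaw: float, fault_condition: bool) -> float:
    """Form an internal model of the aircraft's yaw from sensor input.
    Under the fault condition, the model is wrong."""
    return sensor_yaw + 30.0 if fault_condition else sensor_yaw

def rudder_command(modelled_yaw: float) -> float:
    """Deflection intended to 'correct' the modelled (not the actual) yaw;
    saturates at full deflection, i.e. a hardover."""
    return max(-25.0, min(25.0, -modelled_yaw))

def control_loop(sensor_yaw: float, fault_condition: bool,
                 rudder_connected: bool) -> tuple[float, float]:
    modelled_yaw = estimate_yaw(sensor_yaw, fault_condition)  # explanandum 2: the model
    deflection = rudder_command(modelled_yaw)
    if rudder_connected:
        pass  # explanandum 1: the physical rudder movement would happen here
    return modelled_yaw, deflection

# Test flight with the rudder disconnected: the incorrect model, and the
# hardover command it implies, are still there to be explained.
print(control_loop(sensor_yaw=0.0, fault_condition=True, rudder_connected=False))
# -> (30.0, -25.0)
```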
This is distinct from the Camp 2 view, in which even if you had a complete description of the physical steps involved in forming the belief or utterance ‘I have a headache’, there would still be something left to explain, namely the subjective character of the experience of having a headache. When the Camp 2 view says that the experience itself is the explanandum, it does privilege subjective experience as a special category of things to be explained. This view asserts that experience has a property of subjectiveness that in our current understanding cannot be explained in terms of the physical steps, and it is this property of subjectiveness itself that demands a satisfactory explanation. When Camp 2 point to experience as explanandum, they’re not saying ‘it would be useful and satisfying to have an explanation of the physical sequence of events that leads up to this state’; they’re saying ‘there is something going on here that we don’t even know how to explain in terms of a physical sequence of events’. Quoting the original post, in this view ‘even if consciousness is compatible with the laws of physics, it still poses a conceptual mystery relative to our current understanding.’
Would it be fair to say then that by ‘mental states’ you mean ‘everything that the brain does that the brain can itself be aware of’?
Yeah, aware of, or conscious of. Psychosis seems to be less a mental state in this sense than a disposition to produce certain mental states.
Suppose I then conduct a test flight in which the aircraft’s autopilot is disconnected from the rudder, and discover a set of conditions that reliably causes the autopilot to form an incorrect model of the state of the aircraft, such that if the autopilot was connected to the rudder it would command a sudden hardover to ‘correct’ the situation. It seems quite reasonable in this case for the explanandum to be ‘the autopilot forming an incorrect model of the state of the aircraft’.
What you call “model” here would presumably correspond only to the externally observable neural correlate of a belief, not to the belief. The case would be the same for a neural correlate of an experience, so this doesn’t provide a difference between the two. Explaining the neural correlate is of course just as “easy” as explaining an utterance. The hard problem is to explain the actual mental states in terms of their correlates. So the case doesn’t explain the belief/experience in question in terms of this correlate. It would be unable to distinguish between a p-zombie, which does have the neural correlate but not the belief/experience, and a normal person. So it seems that either the explanation is unsuccessful as given, since it stops at the neural correlate and doesn’t explain the belief or experience, or you assume the explanandum is actually just the neural correlate, not the belief.
Apologies for the repetition, but I’m going to start by restating a slightly updated model of what I think is going on, because it provides the context for the rest of my comment. Basically I still think there are two elements to our disagreement:
The Camp 1 vs Camp 2 disagreement. Camp 1 thinks that a description of the physical system would completely and satisfactorily explain the nature of consciousness and subjective experience; Camp 2 thinks that there is a conceptual element of subjective experience that we don’t currently know how to explain in physical terms, even in principle. Camp 2 thinks there is a capital-H Hard Problem of consciousness, the ‘conceptual mystery’ in Rafael’s post; Camp 1 does not. I am in Camp 1, and as best I can tell you are in Camp 2.
You think that all(?) ‘mental states’ pose this conceptual Hard Problem, including intentional phenomena like thoughts and beliefs as well as more ‘purely subjective’ phenomena like experiences. My impression is that this is a mildly unorthodox position within Camp 2, although as I mentioned in my original comment I’ve never really understood e.g. what Nagel was trying to say about the relationship between mental phenomena being only directly accessible to a single mind and them being Hard to explain, so I might be entirely wrong about this. In any case, because I don’t believe that there is a conceptual mystery in the first place, the question of (e.g.) whether the explanandum is an utterance vs a belief means something very different to me than it does to you. When I talk about locating the explanandum at utterances vs beliefs, I’m talking about the scope of the physical system to be explained. When you talk about it, you’re talking about the location(s) of the conceptual mystery.
What you call “model” here would presumably correspond only to the externally observable neural correlate of a belief, not to the belief. The case would be the same for a neural correlate of an experience, so this doesn’t provide a difference between the two. Explaining the neural correlate is of course just as “easy” as explaining an utterance. The hard problem is to explain the actual mental states in terms of their correlates. So the case doesn’t explain the belief/experience in question in terms of this correlate.
As a Camp 1 person, I don’t think that there is any (non-semantic) difference between the observable neurological correlates of a belief or any other mental phenomenon and the phenomenon itself. Once we have a complete physical description of the system, we Camp 1-ites might bicker over exactly which bits of it correspond to ‘experience’ and ‘consciousness’, or perhaps claim that we have reductively dissolved such questions entirely; but we would agree that these are just arguments over definitions rather than pointing to anything actually left unexplained. I don’t think there is a Hard Problem.
It would be unable to distinguish between a p-zombie, which does have the neural correlate but not the belief/experience, and a normal person.
I take Dennett’s view on p-zombies, i.e. they are not conceivable.
So it seems that either the explanation is unsuccessful as given, since it stops at the neural correlate and doesn’t explain the belief or experience, or you assume the explanandum is actually just the neural correlate, not the belief.
In the Camp 1 view, once you’ve explained the neural correlates, there is nothing left to explain; whether or not you have ‘explained the belief’ becomes an argument over definitions.