I’m still a bit confused by what you mean by ‘mental states’. My best guess is that you are using it as a catch-all term for everything that is or might be going on in the brain, which includes experiences, beliefs, thoughts, and more general states like ‘psychotic’ or ‘relaxed’.
I would not count “psychotic” here, since one is not necessarily directly acquainted with it (one doesn’t necessarily know one has it).
I don’t think an experience can be propositional. I don’t understand how this relates to whether these particular mental states are able to be explained.
I thought you saw the fact that beliefs are about something as evidence that they are easier to explain than experiences, or that they are at least more similar to utterances than to experiences. I responded that aboutness (technical term: intentionality) doesn’t matter, as several things that are commonly regarded as qualia, just like experiences, can be about something, e.g. loves or fears. So accepting beliefs as explanandum would not be in principle different from accepting fears or experiences as explanandum, which would seem to put you more in Camp #2 than in #1.
I think the main disagreement is actually just the one above: What counts as a simple explanandum such that we would not run into hard explanatory problems? My position is that only utterances act as such a simple explanandum, and that no subjective mental state (things we are directly acquainted with, like intentional states, emotions and experiences) is simple in this sense, since they are not obviously compatible with any causal explanation.
I would not count “psychotic” here, since one is not necessarily directly acquainted with it (one doesn’t necessarily know one has it).
Would it be fair to say then that by ‘mental states’ you mean ‘everything that the brain does that the brain can itself be aware of’?
I thought you saw the fact that beliefs are about something as evidence that they are easier to explain than experiences
I don’t think there is any connection between whether a thought/belief/experience is about something and whether it is explainable. I’m not sure about ‘easier to explain’, but it doesn’t seem like the degree of easiness is a key issue here. I hold the vanilla Camp 1 view that everything the brain is doing is ultimately and completely explainable in physical terms.
or that they are at least more similar to utterances than to experiences
I do think beliefs are more similar to utterances than to experiences. If we were to draw an ontology of ‘things brains do’, utterances would probably be a closer sibling to thoughts than to beliefs, and perhaps a distant cousin to experiences. A thought can be propositional (‘the sky is blue’) or non-propositional (‘oh no!’), as can an utterance, but a belief is only propositional, while an experience is never propositional. I think an utterance could be reasonably characterised as a thought that is not content to stay swimming around in the brain but for whatever reason escapes out through the mouth. To be clear though, I don’t think any of this maps onto the question of whether these phenomena are explicable in terms of the physical implementation details of the brain.
So accepting beliefs as explanandum would not be in principle different from accepting fears or experiences as explanandum, which would seem to put you more in Camp #2 rather than #1.
I think there is an in-principle difference between Camp 1 ‘accepting beliefs [or utterances] as explanandum’ and Camp 2 ‘accepting experiences as explanandum’. When you ask ‘What counts as a simple explanandum such that we would not run into hard explanatory problems?’, I think the disagreement between Camp 1 and Camp 2 in answering this question is not over ‘where the explanandum is’ so much as ‘what it would mean to explain it’.
It might help here to unpack the phrase ‘accepting beliefs as explanandum’ from the Camp 1 viewpoint. In a way this is a shorthand for ‘requiring a complete explanation of how the brain as a physical system goes from some starting state to the state of having the belief’. The belief or utterance as explanandum works as a shorthand for this for the reasons I mentioned above, i.e. that any explanation that does not account for how the brain ended up having this belief or generating this utterance is not a complete and satisfactory explanation. This doesn’t privilege either beliefs or utterances as special categories of things to be explained; they just happen to be end states that capture everything we think is worth explaining about something like ‘having a headache’ in particular circumstances like ‘forming a belief that I have a headache’ or ‘uttering the sentence “I have a headache”’.
By analogy, suppose that I was an air safety investigator investigating an incident in which the rudder of a passenger jet went into a sudden hardover. The most appropriate explanandum in this case is ‘the rudder going into a sudden hardover’, because any explanation that doesn’t end with ‘...and this causes the rudder to go into a sudden hardover’ is clearly unsatisfactory for my purposes. Suppose I then conduct a test flight in which the aircraft’s autopilot is disconnected from the rudder, and discover a set of conditions that reliably causes the autopilot to form an incorrect model of the state of the aircraft, such that if the autopilot was connected to the rudder it would command a sudden hardover to ‘correct’ the situation. It seems quite reasonable in this case for the explanandum to be ‘the autopilot forming an incorrect model of the state of the aircraft’. There is no conceptual difference in the type of explanation required in the two cases. They can both in principle be explained in terms of a physical chain of events, which in both cases would almost certainly include some sequence of computations inside the autopilot. The fact that the explanandum in the second case is a propositional representation internal to the autopilot rather than a physical movement of a rudder doesn’t pose any new conceptual mysteries. We’re just using the explanandum to define the scope of what we’re interested in explaining.
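To make that last point concrete, here is a deliberately toy sketch in Python (all names, numbers, and the control law are invented purely for illustration; it is not meant to resemble any real autopilot). It only shows that the ‘incorrect model’ and the rudder hardover sit on the same kind of causal chain: one is an internal computational state, the other an actuator command downstream of it.

```python
# Toy illustration only: an invented 'autopilot' whose internal state estimate
# (its 'model' of the aircraft) and whose rudder command are both just stages
# in a single causal/computational chain.
from dataclasses import dataclass

@dataclass
class StateEstimate:
    """The autopilot's internal model of the aircraft's state."""
    sideslip_deg: float  # estimated sideslip angle

def estimate_state(yaw_rate: float, sensor_bias: float) -> StateEstimate:
    # A biased sensor leads the autopilot to infer a sideslip that isn't
    # really there -- the 'incorrect model of the state of the aircraft'.
    return StateEstimate(sideslip_deg=10.0 * (yaw_rate + sensor_bias))

def rudder_command(estimate: StateEstimate) -> float:
    # If connected to the rudder, the autopilot would command a large
    # deflection to 'correct' the nonexistent sideslip -- the hardover.
    return -3.0 * estimate.sideslip_deg

model = estimate_state(yaw_rate=0.0, sensor_bias=2.5)  # the test-flight explanandum
deflection = rudder_command(model)                     # the original-incident explanandum
print(model, deflection)
```

Choosing `model` rather than `deflection` as the explanandum just changes where we stop telling the causal story, not what kind of story it is.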
This is distinct from the Camp 2 view, in which even if you had a complete description of the physical steps involved in forming the belief or utterance ‘I have a headache’, there would still be something left to explain, namely the subjective character of the experience of having a headache. When the Camp 2 view says that the experience itself is the explanandum, it does privilege subjective experience as a special category of things to be explained. This view asserts that experience has a property of subjectiveness that in our current understanding cannot be explained in terms of the physical steps, and it is this property of subjectiveness itself that demands a satisfactory explanation. When Camp 2 points to experience as explanandum, they’re not saying ‘it would be useful and satisfying to have an explanation of the physical sequence of events that lead up to this state’; they’re saying ‘there is something going on here that we don’t even know how to explain in terms of a physical sequence of events’. Quoting the original post, in this view ‘even if consciousness is compatible with the laws of physics, it still poses a conceptual mystery relative to our current understanding.’
Would it be fair to say then that by ‘mental states’ you mean ‘everything that the brain does that the brain can itself be aware of’?
Yeah, aware of, or conscious of. Psychosis seems to be less a mental state in this sense than a disposition to produce certain mental states.
Suppose I then conduct a test flight in which the aircraft’s autopilot is disconnected from the rudder, and discover a set of conditions that reliably causes the autopilot to form an incorrect model of the state of the aircraft, such that if the autopilot was connected to the rudder it would command a sudden hardover to ‘correct’ the situation. It seems quite reasonable in this case for the explanandum to be ‘the autopilot forming an incorrect model of the state of the aircraft’.
What you call “model” here would presumably correspond only to the externally observable neural correlate of a belief, not to the belief. The case would be the same for a neural correlate of an experience, so this doesn’t provide a difference between the two. Explaining the neural correlate is of course just as “easy” as explaining an utterance. The hard problem is to explain actual mental states in terms of their correlates. So the case doesn’t explain the belief/experience in question in terms of this correlate. It would be unable to distinguish between a p-zombie, which does have the neural correlate but not the belief/experience, and a normal person. So it seems that either the explanation is unsuccessful as given, since it stops at the neural correlate and doesn’t explain the belief or experience, or you assume the explanandum is actually just the neural correlate, not the belief.
Apologies for the repetition, but I’m going to start by restating a slightly updated model of what I think is going on, because it provides the context for the rest of my comment. Basically I still think there are two elements to our disagreement:
The Camp 1 vs Camp 2 disagreement. Camp 1 thinks that a description of the physical system would completely and satisfactorily explain the nature of consciousness and subjective experience; Camp 2 thinks that there is a conceptual element of subjective experience that we don’t currently know how to explain in physical terms, even in principle. Camp 2 thinks there is a capital-H Hard Problem of consciousness, the ‘conceptual mystery’ in Rafael’s post; Camp 1 does not. I am in Camp 1, and as best I can tell you are in Camp 2.
You think that all(?) ‘mental states’ pose this conceptual Hard Problem, including intentional phenomena like thoughts and beliefs as well as more ‘purely subjective’ phenomena like experiences. My impression is that this is a mildly unorthodox position within Camp 2, although as I mentioned in my original comment I’ve never really understood e.g. what Nagel was trying to say about the relationship between mental phenomena being only directly accessible to a single mind and them being Hard to explain, so I might be entirely wrong about this. In any case, because I don’t believe that there is a conceptual mystery in the first place, the question of (e.g.) whether the explanandum is an utterance vs a belief means something very different to me than it does to you. When I talk about locating the explanandum at utterances vs beliefs, I’m talking about the scope of the physical system to be explained. When you talk about it, you’re talking about the location(s) of the conceptual mystery.
What you call “model” here would presumably correspond only to the externally observable neural correlate of a belief, not to the belief. The case would be the same for a neural correlate of an experience, so this doesn’t provide a difference between the two. Explaining the neural correlate is of course just as “easy” as explaining an utterance. The hard problem is to explain actual mental states in terms of their correlates. So the case doesn’t explain the belief/experience in question in terms of this correlate.
As a Camp 1 person, I don’t think that there is any (non-semantic) difference between the observable neurological correlates of a belief or any other mental phenomenon and the phenomenon itself. Once we have a complete physical description of the system, we Camp 1-ites might bicker over exactly which bits of it correspond to ‘experience’ and ‘consciousness’, or perhaps claim that we have reductively dissolved such questions entirely; but we would agree that these are just arguments over definitions rather than pointing to anything actually left unexplained. I don’t think there is a Hard Problem.
It would be unable to distinguish between a p-zombie, which does have the neural correlate but not the belief/experience, and a normal person.
I take Dennett’s view on p-zombies, i.e. they are not conceivable.
So it seems that either the explanation is unsuccessful as given, since it stops at the neural correlate and doesn’t explain the belief or experience, or you assume the explanandum is actually just the neural correlate, not the belief.
In the Camp 1 view, once you’ve explained the neural correlates, there is nothing left to explain; whether or not you have ‘explained the belief’ becomes an argument over definitions.