This is a good point. But on the other hand, we can be very confident that there are algorithms that exhibit behavior we would explain, in ourselves, as a consequence of feeling things, and that there are “parallel explanations” running between the mechanistic account of the algorithm’s behavior and the feelings-based account we would normally give of ourselves.
And they can conceivably do all that without feelings. The flip side of not being able to explain why an algorithm should feel like anything on the inside is that zombies are conceivable.
Another hint at this correspondence is that we can make models of humans themselves as if their feelings were due to the mechanistic behavior of neurons, make predictions and plans using that model, and then try them out, and as far as we can tell the model makes successful predictions about what we will feel.
Models in which mental states figure also make successful predictions: you can predict ouches from pains. The physical map is not uniquely predictive.
am I absolutely committed to Cartesian dualism,
Cartesian dualism is not the only alternative to physicalism.
And they can conceivably do all that without feelings.
Sure, if we mean “conceivable” in the same way that “561 is prime” and “557 is prime” are both conceivable. That is, conceivable in a way that allows for internal contradictions, so long as we haven’t figured out where the internal contradictions are yet.
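As a purely illustrative aside (not part of anyone’s argument here): the force of that example is that exactly one of the two statements is actually true, and a few lines of trial division make the hidden contradiction concrete. This is just a throwaway sketch in Python; the names are mine, nothing in it is quoted from the discussion.

```python
def smallest_factor(n):
    """Return the smallest factor of n greater than 1, or None if n is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None

for n in (557, 561):
    f = smallest_factor(n)
    if f is None:
        print(f"{n} is prime")
    else:
        print(f"{n} = {f} * {n // f}")

# Output:
# 557 is prime
# 561 = 3 * 187
```

(And 187 = 11 * 17, so 561 = 3 * 11 * 17; “561 is prime” was only conceivable for as long as nobody did the division.)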
“am I absolutely committed to Cartesian dualism,”
Cartesian dualism is not the only alternative to physicalism.
True, but it’s a very convenient central example of a priori dualism, which has no space in its framework for any evidence (either from sensations of the external world or phenomena in general) that it’s actually being implemented on a physical substrate.
That is, conceivable in a way that allows for internal contradictions, so long as we haven’t figured out where the internal contradictions are yet.
You seem to be saying that an algorithm is necessarily conscious, only we don’t know how or why, so that, for us, there is no internal contradiction in imagining an unconscious algorithm.
That’s quite a strange thing to say. How do we know that consciousness is necessitated when we don’t understand it? Is it necessitated by all algorithms that report consciousness? Do we know that it depends solely on the abstract algorithm and not the substrate?
“Dualism wrong” contains little information, and therefore tells you little about the features of non-dualism.
Hm, no, I don’t think you got what I meant. One thing I am saying is that I think there’s a very strong parallel between not knowing how one could show whether a computer program is conscious, and not having any idea how one could change their mind about dualism in response to evidence.
True, but it’s a very convenient central example of a priori dualism, which has no space in its framework for any evidence (either from sensations of the external world or phenomena in general) that it’s actually being implemented on a physical substrate.
You seem to be using “a priori” to mean something like “dogmatic and incapable of being updated”. But a priori doesn’t mean that, and contemporary dualists are capable of saying what it would take to change their minds: a reductive explanation of consciousness.
Their merely saying they’ll be convinced by a “reductive explanation” is too circular for my tastes. It’s like me saying “You could convince me the moon was made of green cheese if you gave me a convincing argument for it.” It’s not false, but it doesn’t actually make any advance commitments about what such an argument might look like.
If someone says they’re open to being persuaded “in principle,” but has absolutely no idea what evidence could sway them, then my bet is that any such persuasion will have nothing to do with science, little to do with logic, and a lot to do with psychology.
That’s not an apt analogy, because reductive explanations have an agreed set of features.
It’s odd to portray reductive explanation as this uselessly mysterious thing, when it is the basis of reductionism, which is an obligatory belief around here.
I’m not sure if we’re using “reductive explanation” the same way then, because if we associate it with the closest thing I think is agreed upon around here, I don’t feel like dualists would agree that such a thing truly works.
What I’m thinking of is explanation based on a correspondence between two different models of our experience. Example: I can explain heat by the motion of atoms by showing that atomic theory predicts very similar phenomena to the intuitive model that led to me giving “heat” a special property-label. This is considered progress because atomic theory also makes a lot of other good predictions, without much complexity.
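For concreteness, and only as a standard textbook illustration of the kind of correspondence meant (nothing in the argument depends on it), the kinetic-theory bridge for heat in an ideal monatomic gas identifies temperature with mean molecular kinetic energy:

\[
\tfrac{3}{2}\, k_B T \;=\; \bigl\langle \tfrac{1}{2} m v^{2} \bigr\rangle ,
\]

where \(k_B\) is Boltzmann’s constant and the average is taken over the molecules; the intuitive property-label “hot” ends up attached to the right-hand side.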
These models include bridging laws (e.g. when the atoms in the nerves in your skin move fast, you feel heat). Equivalently, they can be thought of as purely models of our phenomena that merely happen to include the physical world to the extent that it’s useful. This is “common sense” on LW because of how much we like Solomonoff induction, but isn’t necessarily common sense among materialist scientists, let alone dualists.
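(For what it’s worth, the Solomonoff-flavored justification for preferring simple bridging laws is just the usual description-length prior; one common schematic way of writing it, included only as background and not something a dualist is being asked to grant, is

\[
P(h) \;\propto\; 2^{-K(h)},
\]

where \(K(h)\) is the length in bits of the shortest program that generates hypothesis \(h\), so every extra bit a bridging law adds costs a factor of two in prior probability.)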
These inferred bridging laws can do pretty neat things. Even though they at first would seem to only work for you (there being no need to model the phenomena of other minds if we’re already modeling the atoms), we can still ask what phenomena “you” would experience if “you” were someone else, or even if you were a bat. At first it might seem like the bridging laws should be specific to the exact configuration of your brain and give total nonsense if applied to someone else, but if they’re truly as simple as possible then we would expect them to generalize for the same sorts of reasons we expect other simple patterns in our observations to generalize.
Anyhow, that’s what I would think of as a reductive explanation of consciousness—rules that parsimoniously explain our experiences by reference to a physical world. But there’s a very resonant sense in which it will feel like there’s still an open question of why those bridging laws hold, and maybe that we haven’t shown that the experiences are truly identical to the physical patterns rather than merely being associated with them. (Note that all this applies equally well to our explanation of heat.)
“Look,” says the imaginary dualist, “you have a simple explanation of the world here, but you’ve actually shown that dualism is right! You have one part of this explanation that involves the world, and another part that involves the experiences. But nowhere does this big model of our experiences say that the experiences are made of the same stuff as the world. You haven’t really explained how consciousness arises from patterns of matter, you’ve just categorized what patterns of matter we expect to see when we’re in various conscious states.”
Now, if the dualists were hardcore committed to Occam’s razor, maybe they would come around. But somehow I don’t associate dualists with the phrase “hardcore committed to Occam’s razor.” The central issue is that a mere simple model isn’t always a good explanation by human standards—it doesn’t actually put in the explanatory work necessary to break the problem into human-understandable pieces or resolve our confusions. It’s just probably right. A classic example is “Why do mirrors flip left and right but not up and down?” Maxwell’s equations are a terrible explanation of this.
If the bridging laws, which explain how and why mental states arise from physical states, are left unspecified, then the complexity of the explanation cannot be assessed, so Occam’s razor doesn’t kick in. To put it another way, Occam’s razor applies to explanations, so you first need to clear the bar of being explanatory at all.
What you call being hardcore about Occam’s razor seems to mean believing in the simplest possible ((something)), where ((something)) doesn’t have to be an explanation.
A classic example is “Why do mirrors flip left and right but not up and down?” Maxwell’s equations are a terrible explanation of this.
Maxwell’s equations are a bad intuitive explanation of reflection flipping, but you can’t deny that the intuitive explanation is implicit in Maxwell’s equations, because the alternative is that the flipping is a physics-defying miracle.
The central issue is that a mere simple model isn’t always a good explanation by human standards—it doesn’t actually put in the explanatory work necessary to break the problem into human-understandable pieces or resolve our confusions.
What’s the equivalent of Maxwell’s equations in the mind-body problem?
These inferred bridging laws can do pretty neat things. Even though they at first would seem to only work for you (there being no need to model the phenomena of other minds if we’re already modeling the atoms), we can still ask what phenomena “you” would experience if “you” were someone else, or even if you were a bat
We can ask, but as far as I know there is no answer. I have never heard of a set of laws that allows novel subjective experience to be predicted from brain states. But are your “inferred” and “would” meant to imply that they don’t exist?