From just reading the definition of the Mary’s Room problem my knee-jerk reaction was “this seems plausible.” It is a textbook example of how an algorithm feels from the inside.
You might know everything there is to know intellectually about colours, but that does not induce sensations in your visual cortex. Humans don’t work that way.
A Bayesian AI might go “that is about what I expected” when you switch it from a black-and-white camera to a colour one, solely on the basis of the production parameters of the camera, a physics paper on optics, and perhaps a single colour photo.
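A toy sketch of that intuition (my own illustration, not from the thread, with made-up numbers): an agent whose prior is already well informed by “spec sheet” knowledge is barely surprised by its first real colour data, and its posterior barely moves.

```python
import math

def beta_update(a, b, successes, failures):
    """Conjugate update of a Beta(a, b) prior on a Bernoulli rate."""
    return a + successes, b + failures

def surprise(a, b, successes, failures):
    """Negative log-likelihood (in nats) of the data at the prior mean."""
    p = a / (a + b)
    return -(successes * math.log(p) + failures * math.log(1 - p))

# Hypothetical numbers: camera specs plus one sample photo give the agent
# a strong prior that ~70% of daylight pixels are "warm-toned".
informed_prior = (70, 30)   # strong prior from specs + one colour photo
ignorant_prior = (1, 1)     # uniform prior: no colour knowledge at all

data = (68, 32)             # first real colour frame: 68 warm, 32 cool

# The informed agent finds the data less surprising than the ignorant one...
print(surprise(*informed_prior, *data) < surprise(*ignorant_prior, *data))

# ...and its belief barely shifts: prior mean 0.70, posterior mean ~0.69.
a, b = beta_update(*informed_prior, *data)
print(round(a / (a + b), 2))
```

Whether this kind of small predictive surprise has anything to do with what Mary learns is, of course, exactly what the thread is arguing about.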
You might know everything there is to know intellectually about colours, but that does not induce sensations in your visual cortex. Humans don’t work that way.
(nods) This is likely true, but a lot of work is being done by the word “intellectually”. If the conclusion of the thought experiment is that there is information to be obtained by experience that is not captured by whatever we’re calling “intellectual” knowledge, or more generally that there’s information in-principle-extractable from an event that isn’t ordinarily extracted by particular cognitive systems, that’s not really all that remarkable.
Using this thought experiment the way it is traditionally used requires a bit of sleight of hand, wherein we are encouraged to apply our intuitions about the kinds of knowledge our brains extract to all knowledge.
that there is information to be obtained by experience that is not captured by whatever we’re calling “intellectual” knowledge, or more generally that there’s information in-principle-extractable from an event that isn’t ordinarily extracted by particular cognitive systems, that’s not really all that remarkable.
It is not remarkable from many perspectives, but it does contradict some forms of physicalism, which hold that everything is understandable by the methods of physical science, i.e. from the outside.
Can you clarify the contradiction? It seems to me there could easily be information in-principle-extractable from an event, that isn’t ordinarily extracted by particular cognitive systems, that is understandable by the methods of physical science.
I mean, I certainly agree that if we posit the existence of information that is not in-principle understandable by the methods of physical science but is ordinarily extracted by particular cognitive systems, then that does contradict the forms of physicalism you’re talking about here.
My point, though, is that the scenario described in Mary’s Room does not give us any reason to believe that such information exists.
Several people on this site have responded to M’s R with the claim that (in effect) such information does exist. The claim is usually expressed along the lines of an individual needing to be in a brain state, to personally instantiate it, in order to understand it. For instance: “You might know everything there is to know intellectually about colours, but that does not induce sensations in your visual cortex”.
So, to repeat myself, I agree that if such information exists, then the forms of physicalism you’re talking about here are false.
And I agree that several people have responded to Mary’s Room with the claim that such information does exist, and I don’t deny that they’ve done so, nor have I even denied that they’ve done so.
What I do deny is that Mary’s Room demonstrates that such information exists, or that they are justified in believing anything different after being exposed to MR than before.
MR just invites me to generalize from my limited experience of knowing some things about color to a hypothesized state of intellectually knowing everything there is to know about color, and anticipates that I will ignorantly imagine keeping other aspects of my limited experience fixed. If I instead ignorantly imagine other aspects of my limited experience varying, it completely fails to demonstrate what it’s claimed to demonstrate.
For example, if I start out believing that sensations of color are in-principle unavailable to a healthy human brain solely by virtue of being in that hypothesized state, then MR might feel like a compelling demonstration of that claim. “Oh look, there’s Mary,” I might say, “and I know she’s never had such sensations, so clearly seeing a yellow banana is new information to her, therefore...etc. etc. etc.”.
Conversely, if I don’t believe that to start with, it might not. “Oh look,” I might say, “there’s Mary, who knows everything there is to know about color, and has probably therefore had vivid dreams of seeing color as her brain has made various connections with that information, so clearly seeing a yellow banana is not new information to her, therefore etc. etc.”
There might be good reasons to reject that second intuition and embrace the first, or vice-versa, but thinking about Mary’s Room is not a good reason to do either. It’s just a question-begging invitation to visualize my preconceptions and treat them as confirming data.
All of that said, I certainly agree that there exist experiences which depend on certain classes of brain states to instantiate them, and that in the absence of those brain states those experiences are not possible.
What I do deny is that Mary’s Room demonstrates that such information exists, or that they are justified in believing anything different after being exposed to MR than before.
No, it doesn’t demonstrate it like a mathematical proof. It isn’t intended to work that way. It is supposed to be an intuition pump.
Conversely, if I don’t believe that to start with, it might not. “Oh look,” I might say, “there’s Mary, who knows everything there is to know about color, and has probably therefore had vivid dreams of seeing color as her brain has made various connections with that information, so clearly seeing a yellow banana is not new information to her, therefore etc. etc.”
To have dreams of colour is to be in the brain state. So you are not saying Mary would have the information of what yellow looks like without ever having been in a seeing-yellow state. These kinds of loophole-finding objections are rather pointless, because you can always add riders to the thought experiment to block them: Mary’s skin has been bleached, she has been given drugs to prevent dreaming, etc.
There might be good reasons to reject that second intuition and embrace the first, or vice-versa,
If we have reasons for an intuition, it isn’t an intuition.
All of that said, I certainly agree that there exist experiences which depend on certain classes of brain states to instantiate them, and that in the absence of those brain states those experiences are not possible.
But that isn’t relevant. What is relevant is whether personally instantiating a state is necessary to understand something.
If we’re agreed about the nature of Mary’s Room, great.
I decline to get into a discussion of how thought experiments are supposed to work, but I certainly agree with you that they aren’t supposed to be mathematical proofs.
I also decline to get into yet another discussion about the nature of conscious experience.
If we’re agreed about the nature of Mary’s Room, great.
Agreed on what about Mary’s Room? I don’t agree that there are “right” and “wrong” intuitions about it, and I am not a fan of “M’s R is bad because all thought experiments are bad”.
Agreed that Mary’s Room doesn’t demonstrate the existence of information that is not in-principle understandable by the methods of physical science but is ordinarily extracted by particular cognitive systems; and that it’s solely intended as an intuition pump, as you say.
I certainly don’t believe that all thought experiments are bad, but again, I decline to get into a discussion of how thought experiments are supposed to work.
I’m surprised by your patient discussion with Peterdjones. My experience was that he is impossible to get through to, so I gave up a long time ago. Have you had any success?
I’m not quite sure what success looks like. Mostly, I’ve been trying to clarify my initial point about Mary’s Room, which we may have made minor progress on.
You mean I remained unconvinced by your claim that reality isn’t real?
Its “nature”.
Mary’s Room is:
A thought experiment.
Supposed to be an intuition pump.
Not a formal proof of anything.
Possible conditional extension:
Of usefulness dependent upon the relevance of its premises, the things it seeks to make you think about, and the reliability of human intuition.