Do you deny that the Boltzmann brain thinks it can refer to Obama? I.e. that it has some mental representation of an external world that is indistinguishable from your own except insofar as it does not successfully refer to anything in an external world?
Yes. I don’t think the Boltzmann brain has a representation of the external world at all. Whether or not a system state is a representation of something else is not an intrinsic property of the state. It depends on how the state was produced and how it is used. If you disagree, could you articulate what it is you think makes a state representational?
I could use a salt shaker and a pepper shaker to represent cars when I’m at a restaurant telling my friend about a collision I recently experienced. Surely you’d agree that the particular arrangement of those shakers I constructed is not intrinsically representational. If they had ended up in that arrangement by chance they wouldn’t be representing anything. Why do you think neural arrangements are different?
Claiming that the subjective situations of the human and the Boltzmann brain are evidentially distinguishable is totally bizarre when they themselves can’t make the distinction.
I don’t think this is what I’m claiming. I’m not sure what you mean by the “subjective situations” of the human and the Boltzmann brain, but I don’t think I’m claiming that the subjective situations themselves are evidentially distinguishable. I don’t think I can point to some aspect of my phenomenal experience that proves I’m not a Boltzmann brain.
I’m claiming that the evidence available to me goes beyond my phenomenal experience, that one’s evidence isn’t fully determined by one’s “subjective situation”. Boltzmann brains and human beings have different evidence. Is this the view you regard as bizarre? It doesn’t seem all that different from the view expressed in the Sequences here.
Further, I think if two observers have vastly different sets of evidence, then it is permissible to place them in separate reference classes when reasoning about certain anthropic problems.
You should stop assuming that everyone is misunderstanding you.
I don’t assume this. I think some people are misunderstanding me. Others have expressed a position which I see as actually opposed to my position, so I’m pretty sure they have understood me. I think they’re wrong, though.
Your incorrect prediction about how I would respond to your question is an indication that you have at least partially misunderstood me. I suspect the number of people who have misunderstood me on this thread is explicable by a lack of clarity on my part.
Everyone is giving you the same criticism in different words and your argument is not being upvoted. Update on this information.
I have, but it does not shift my credence enough to convince me I’m wrong. Does the fact that a majority of philosophers express agreement with externalism about mental content lead you to update your position somewhat? If you are unconvinced that I am accurately representing externalism, I encourage you to read the SEP article I linked and make up your own mind.
Yes. I don’t think the Boltzmann brain has a representation of the external world at all.
Not “the” external world, but “an external world.” And when I say “external world” I don’t mean a different, actually existing external world, but a symbolic system that purports to represent an external world, just as a human’s brain contains a symbolic system (or something like that) that actually represents the external world.
If they had ended up in that arrangement by chance they wouldn’t be representing anything. Why do you think neural arrangements are different?
The question here isn’t the relationship between the symbolic system/neural arrangement (call it S) and the system S purports to represent (call it R). The question is about the relation between S and the rest of the neural arrangement that produces phenomenal experience (call it P). If I understood exactly how that worked I would have solved the problem of consciousness. I have not done so. I’m okay with an externalism that simply says for S to count as an intentional state it must have a causal connection to R. But a position that a subject’s phenomenal experience supervenes not just on P and S but also on R is much more radical than typical externalism and would, absent a lot of explanation, imply that physicalism is false.
You’re not a phenomenal externalist, correct?
I’m claiming that the evidence available to me goes beyond my phenomenal experience, that one’s evidence isn’t fully determined by one’s “subjective situation”. Boltzmann brains and human beings have different evidence. Is this the view you regard as bizarre? It doesn’t seem all that different from the view expressed in the Sequences here.
Ah, this might be the point of contention. What makes phenomenal experience evidence of anything is that we have good reason to think it is causally entangled with an external world. But a Boltzmann brain would have exactly the same reasons. That is, an external, persistent, physical world is the best explanation of our sensory experiences, and so we take our sensory experiences to tell us things about that world, which lets us predict and manipulate it. But the Boltzmann brain has exactly the same sensory experiences (and memories). It will make the same predictions (regarding future sensory data) and run the same experiments (except the Boltzmann brain’s will be ‘imaginary’ in a sense), which will return the same results (in terms of sensory experiences).
I don’t really want this to be about the definition of evidence. But surely having different sets of evidence implies that a perfect Bayesian reasoner wouldn’t return the same updates and credences for both sets!
No, I’m not, you’ll be glad to hear. There are limits even to my lunacy. I was just objecting to your characterization of the BB’s brain state as a representation. I’m not even all that happy with calling it a purported representation. If the salt and pepper shaker arrangement occurs by chance, does that make it a purported representation without actual representational content? Who’s doing the purporting? Is it sufficient that some system could be used as a representation for it to count as a purported representation? In that case, everything is a purported representation.
I think there’s a tendency to assume our mental representations somehow have intrinsic representational properties that we wouldn’t attribute to other external representations. This is probably because phenomenal representation seems so immediate. If a Boltzmann brain’s visual system were in the same state mine is in when I see my mother, then maybe the brain isn’t visually representing my mother, but surely it is representing a woman, or at least something. Well, no, I don’t think so. If a physical system that is atom-for-atom identical to a photograph of my mother congealed out of a high-entropy soup, it would not be a representation of my mother. It wouldn’t be a representation at all, not even a purported one.
But surely having different sets of evidence implies that a perfect Bayesian reasoner wouldn’t return the same updates and credences for both sets!
First, the Boltzmann brain and I do not return the same updates. The only thing identical about our updates is their syntactical instantiation. Their semantics differ. In fact, I wouldn’t even say the Boltzmann brain is performing Bayesian reasoning. Being able to find an isomorphism between a physical process some system is undergoing and a genuine computational process does not mean that the system is actually performing the computation. Accepting that would entail that every physical system is performing every computation.
Second, I disagree with your claim that perfect Bayesian reasoners would return different updates for different sets of evidence. I see no reason to believe this is true. As long as the likelihood ratios (and priors) are the same, the updates will be the same, but likelihood ratios aren’t unique to particular pieces of evidence. As an example, suppose a hypothesis H predicts a 30% chance of observing a piece of evidence E1, and the chance of observing that evidence if H had been false is 10%. It seems to me entirely possible that there is a totally different piece of evidence, E2, which H also predicts has a 30% chance of being observed, and ~H predicts has a 10% chance of being observed. A Bayesian reasoner who updated on E1 would return the same credence as one who updated on E2, even though E1 and E2 are different. None of this seems particularly controversial. Am I misunderstanding your claim?
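To make that arithmetic concrete, here is a minimal sketch of the example (the prior of 0.5 and the function name are assumptions I’ve added for illustration; only the likelihood ratio matters):

```python
# Minimal sketch of the likelihood-ratio point above. E1 and E2 are
# different pieces of evidence that happen to share the same likelihoods:
# P(E | H) = 0.3 and P(E | ~H) = 0.1.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

prior = 0.5  # assumed prior P(H); any value makes the same point

post_after_e1 = posterior(prior, 0.3, 0.1)
post_after_e2 = posterior(prior, 0.3, 0.1)

print(post_after_e1, post_after_e2)  # 0.75 0.75: same ratio, same update
```

The two posteriors coincide because Bayes’ theorem only sees the prior and the likelihoods, not the identity of the evidence.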
Yes, but that’s my fault. Let’s put it this way: a set of evidence is indistinguishable from another set of evidence if and only if an ideal Bayesian reasoner can update on either one and then not update at all after learning the other.
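In symbols (my notation, with E1 and E2 for the two evidence sets and H ranging over hypotheses): the sets are indistinguishable just in case P(H | E1) = P(H | E1, E2) and P(H | E2) = P(H | E2, E1) for every H.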
First, the Boltzmann brain and I do not return the same updates.
That’s not the issue. Neither you nor your Boltzmann brain copy is an ideal Bayesian reasoner. The question is: what happens when you feed your evidence to an ideal Bayesian reasoner and then feed it the Boltzmann brain’s evidence? Will the ideal Bayesian reasoner find anything new to update on? What if you reverse the process and feed the Boltzmann brain’s evidence first? Will the ideal Bayesian reasoner update then?
The only thing identical about our updates is their syntactical instantiation. Their semantics differ. In fact, I wouldn’t even say the Boltzmann brain is performing Bayesian reasoning. Being able to find an isomorphism between a physical process some system is undergoing and a genuine computational process does not mean that the system is actually performing the computation. Accepting that would entail that every physical system is performing every computation.
The pancomputation issue is tricky, but it has nothing to do with this. By stipulation, Boltzmann brains are physically similar enough to humans to make computations that produce exactly similar brain states. Moreover, you say you are not a phenomenal externalist, so the computations made by Boltzmann brains apparently produce exactly similar phenomenal experiences. Pancomputation isn’t any more of a problem for me than it is for you.
Perhaps this is just going to end up being a reductio on externalism.
Who’s doing the purporting?
The Boltzmann brain, obviously. Are you denying that a Boltzmann brain can have any intentional states? I.e., can it believe things about its phenomenal experience, qualia, or other mental states? Can’t it believe it believes something?
If the salt and pepper shaker arrangement occurs by chance, does that make it a purported representation without actual representational content? Who’s doing the purporting?
Well, the simpler part of this is that representation is a three-place predicate: system A represents system B to observer C1, which does not imply that A represents B to C2, nor does it prevent A from representing B2 to C2. (Nor, indeed, to C1.)
So, yes, a random salt-and-pepper-shaker arrangement might represent any number of things to any number of observers.
A purported representation is presumably some system A about which the claim is made (by anyone capable of making claims) that there exists a (B, C) pair such that A represents B to C.
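Put slightly more formally (my notation, nothing standard): write Rep(A, B, C) for “A represents B to C”. Then a purported representation is any system A about which someone claims that there exist some B and C such that Rep(A, B, C).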
But there’s a deeper disconnect here having to do with what it means for A to represent B to C in the first place, which we’ve discussed elsethread.
Being able to find an isomorphism between a physical process some system is undergoing and a genuine computational process does not mean that the system is actually performing the computation. Accepting that would entail that every physical system is performing every computation.
Sure. And if I had a brain that could in fact treat all theoretically possible isomorphisms as salient at one time, I would indeed treat every physical system as performing every computation, and also as representing every other physical system. In fact, though, I lack such a brain; what my brain actually does is treat a vanishingly small fraction of theoretically possible isomorphisms as salient, and I am therefore restricted to only treating certain systems as performing certain computations and as representing certain other systems.