My externalism stuff is just intended to establish that Boltzmann brains and actual humans embedded in stable macroscopic worlds have different evidence available to them. At this point, I need make no claim about which of these is me. So I don’t think the anti-skeptical assumption plays a role here. My claim at this point is just that these two systems are in different epistemic situations (they have different beliefs, knowledge, evidence).
The rejection of skepticism is a separate assumption. As you say, there’s good pragmatic reason to reject skepticism. I’m not sure what you mean by “pragmatic reason”, but if you mean something like “We don’t actually know skepticism is false, but we have to operate under the assumption that it is” then I disagree. We do actually know there is an external world. To claim that we do not is to raise the standard of evidence to an artificially high level. Consistent sensory experience of an object in a variety of circumstances is ordinarily sufficient to claim that we know the object exists (despite the possibility that we may be in the Matrix).
So now we have two premises, both arrived at through different and independent chains of reasoning. The first is that subjective indistinguishability does not entail evidential indistinguishability. The second is that I am not a Boltzmann brain. The combination of these two premises leads to my conclusion, that one might be justified in excluding Boltzmann brains from one’s reference class. Now, a skeptic would attack the second premise. Fair enough, I guess. But realize that is a different premise from the first one. If your objection is skepticism, this objection has nothing to do with semantic externalism. And I think skepticism is a bad (and somewhat pointless) objection.
My claim at this point is just that these two systems are in different epistemic situations (they have different beliefs, knowledge, evidence).
That’s fine. But what matters is that they can’t actually tell they are in different epistemic situations. You’ve identified an objective distinction between Boltzmann brains and causally-embedded people. That difference is essentially: for the latter the external world exists, for the former it does not. But you haven’t provided any way for a Boltzmann brain or a regular old-fashioned human being to infer anything different about the external world. You’re confusing yourself with word games. A Boltzmann brain and a human being might be evidentially distinguishable in that the former’s intentional states don’t actually refer to anything. But their subjective situations are evidentially indistinguishable. Taboo ‘beliefs’ and ‘knowledge’. Their information states are identical. They will come to identical conclusions about everything. The Boltzmann brain copy of pragmatist is just as confident that he is not a Boltzmann brain as you are.
I disagree. We do actually know there is an external world. To claim that we do not is to raise the standard of evidence to an artificially high level. Consistent sensory experience of an object in a variety of circumstances is ordinarily sufficient to claim that we know the object exists (despite the possibility that we may be in the Matrix).
This statement is only true if you reject either the SSA or a cosmological model that predicts most things that are thinking the same thoughts I am are Boltzmann brains. Which is, like, the whole point of the argument and why it’s not actually a separate assumption. The Boltzmann brain idea, like the Simulation argument, is much stronger than typical Cartesian skepticism, and they are in no way identical arguments. These arguments say that most of the things with your subjective experiences are Boltzmann brains/in a computer simulation. That’s very different from saying that there is a possibility an evil demon is tricking you. And the argument you give above for knowing that there is an external world is sufficient to rebut traditional, Cartesian skepticism, but it is not sufficient to rebut the Boltzmann brain idea or the Simulation argument. These are more potent skepticisms.
Look at it this way: You have two premises that point to you being a Boltzmann brain. Your reply is that the SSA doesn’t actually suggest you are a Boltzmann brain because your intentional states have referents and the Boltzmann brain’s do not. That’s exactly what the Boltzmann brain copy of you is thinking. Meanwhile the cosmological model you’re working under says that just about everything thinking that thought is wrong.
They [Boltzmann brains and human beings] will come to identical conclusions about everything.
Your argument against my view seems to presume that my view is false. I deny that they will come to identical conclusions about everything. When I reason, I come to conclusions about things in my environment. For example, I came to the conclusion that Obama was born in Hawaii, based on evidence about Obama that was available to me. The Boltzmann brain cannot even refer to Obama, so it cannot come to this conclusion.
The Boltzmann brain copy of pragmatist is just as confident that he is not a Boltzmann brain as you are.
No. The Boltzmann brain copy of pragmatist doesn’t have any beliefs about Boltzmann brains (or brains in general) to be confident about. I know you disagree, but again, that disagreement is what’s at issue here. Restating the disagreement in different ways isn’t really an argument against my position.
This statement is only true if you reject either the SSA or a cosmological model that predicts most things that are thinking the same thoughts I am are Boltzmann brains.
The cosmological model doesn’t predict that there are many Boltzmann brains thinking the same thoughts as me. It predicts that there are many Boltzmann brains in the same brain state as me. Whether the SSA says that I am likely to be one of the Boltzmann brains depends on what the appropriate reference class is. There is good reason to think that the appropriate reference class includes all observers with evidence sufficiently similar to mine. I don’t disagree with that version of the SSA. So far, no conflict between SSA + cosmology and the epistemology I’ve described.
What I disagree with is the claim that all subjectively similar observers have to be in the same reference class. The only motivation I can see for this is that subjective similarity entails evidential similarity. But I think there are strong arguments against this. These arguments do not assume anything about whether or not I am a Boltzmann brain. So I don’t see why the arguments I give have to be strong enough to rebut the idea that I’m a Boltzmann brain. That’s not what I’m trying to do. Maybe this comment gives a better idea of how I see the argument I’m responding to, and the nature of my response.
The claim is merely that they will produce identical subsequent brain states and identical nerve impulses.
I agree with this claim, but I don’t see how it can be leveraged into the kind of objection I was responding to. Why should the fact that Boltzmann brains could go through an identical neural process convince me that the reasoning instantiated by me going through the neural process is wrong?
Why should the fact that Boltzmann brains could go through an identical neural process convince me that the reasoning instantiated by me going through the neural process is wrong?
The conclusions of this reasoning, when it’s performed by you, are not wrong, but they are wrong when the same reasoning is performed by a Boltzmann brain. In this sense, the process of reasoning is invalid: it doesn’t produce correct conclusions in all circumstances, and that makes it somewhat unsatisfactory, but of course it works well for the class of instantiations that doesn’t include Boltzmann brains.
As a less loaded model of some of the aspects of the problem, consider two atom-by-atom identical copies of a person who are given identical-looking closed boxes, with one box containing a red glove and the other a green glove. If the green-glove copy for some reason decides that the box it’s seeing contains a green glove, then that copy is right. At the same time, if the green-glove copy so decides, then since the copies are identical, the red-glove copy will also decide that its box contains a green glove, and it will be wrong. Since evidence about the content of the boxes is not available to the copies, deciding either way is in some sense incorrect reasoning, even if it happens to produce a correct belief in one of the reasoners, at the cost of producing an incorrect belief in the other.
OK, that’s a good example. Let’s say the green-glove copy comes to the conclusion that its glove is green because of photons bouncing off the glove and interacting with its cones, which sends certain signals to the optic nerve and so on. In the case of the red-glove copy, a thermodynamic fluctuation occurs that leads it to go through the exact same physical process. That is, the fluctuation makes the cones react just as if they had interacted with green photons, and the downstream process is exactly the same. In this case, you’d want to say both duplicates have unjustified beliefs? The green-glove duplicate arrived at its belief through a reliable process; the red-glove duplicate didn’t. I just don’t see why our conclusion about the justification has to be the same across both copies. Even if I bought this constraint, I’d want to say that both of their beliefs are in fact justified. The red-glove one’s belief is false, but false beliefs can be justified. The red-glove copy just got really unlucky.
Let’s say the green-glove copy comes to the conclusion that its glove is green because of photons bouncing off the glove and interacting with its cones
In my example, the gloves are not observed, the boxes are closed, and the states of the brains of both copies, the nerve impulses they generate, and the words they say will all be, by construction, identical during the thought experiment.
(See also the edit to the grandparent comment, it could be the case that we already agree.)
Whoops, missed that bit. Of course, if either copy is forming a judgment about the glove’s color without actual empirical contact with the glove, then its belief is unjustified. I don’t think the identity of the copies is relevant to our judgment in this case. What would you say about the example I gave, where the box is open and the green-glove copy actually sees the glove? By hypothesis, the brains of both copies remain physically identical throughout the process. In this case, do you think we should judge that there is something problematic about the green-glove copy’s judgment that the glove is green? This case seems far more analogous to a situation involving a human and a Boltzmann brain.
ETA: OK, I just saw the edit. We’re closer to agreement than I thought, but I still don’t get the “unsatisfactory” part. In the example I gave, I don’t think there’s anything unsatisfactory about the green-glove copy’s belief formation mechanism. It’s a paradigm example of forming a belief through a reliable process.
The sense in which your (correct) belief that you are not a Boltzmann brain is justified (or unjustified) seems to me analogous to the situation with the green-glove copy believing that its unobserved glove is green. Justification is a tricky thing: actually not being a Boltzmann brain, or actually being the green-glove copy could in some sense be said to justify the respective beliefs, without a need to rely on distinguishing evidence, but it’s not entirely clear to me how that works.
Your argument against my view seems to presume that my view is false. I deny that they will come to identical conclusions about everything. When I reason, I come to conclusions about things in my environment. For example, I came to the conclusion that Obama was born in Hawaii, based on evidence about Obama that was available to me. The Boltzmann brain cannot even refer to Obama, so it cannot come to this conclusion.
Do you deny that the Boltzmann brain thinks it can refer to Obama? I.e. that it has some mental representation of an external world that is indistinguishable from your own except insofar as it does not successfully refer to anything in an external world?
If your answer is “Yes, I deny it”, then I don’t think you understand what it means to have identical brain states, or your view presumes metaphysically spooky features that you haven’t unpacked. But what I understand your position to be is that you don’t deny it, but that you think the Boltzmann brain can’t come to identical conclusions about the world because its representation of the external world doesn’t successfully refer to anything.
If you want to say that a belief must have a causal connection to the thing it is trying to refer to: fine. We can call what Boltzmann brains have “pseudo-beliefs”. Now, how can you tell if you have beliefs or pseudo-beliefs? You can’t. Claiming that the subjective situations of the human and the Boltzmann brain are evidentially distinguishable is totally bizarre when they themselves can’t make the distinction.
The reason people are focusing on the viability of the skeptical scenario in their responses to you is that it looks like the reason you think this is a viable evidential distinction is that you are unreasonably confident that your mental states successfully refer to an external world. Moreover, a solution to the argument that doesn’t reject the SSA or the cosmological model shouldn’t just play with the meaning of words: readers should react with a sense of “Oh, good. The external world exists after all.” If they don’t, it’s a good indication that you haven’t really addressed the problem. This is true even though the argument starts by assuming that we are not Boltzmann brains, since obviously the logical structure remains intact.
You should stop assuming that everyone is misunderstanding you. Everyone is giving you the same criticism in different words and your argument is not being upvoted. Update on this information.
Do you deny that the Boltzmann brain thinks it can refer to Obama? I.e. that it has some mental representation of an external world that is indistinguishable from your own except insofar as it does not successfully refer to anything in an external world?
Yes. I don’t think the Boltzmann brain has a representation of the external world at all. Whether or not a system state is a representation of something else is not an intrinsic property of the state. It depends on how the state was produced and how it is used. If you disagree, could you articulate what it is you think makes a state representational?
I could use a salt and pepper shaker to represent cars when I’m at a restaurant telling my friend about a collision I recently experienced. Surely you’d agree that the particular arrangement of those shakers I constructed is not intrinsically representational. If they had ended up in that arrangement by chance they wouldn’t be representing anything. Why do you think neural arrangements are different?
Claiming that the subjective situations of the human and the Boltzmann brain are evidentially distinguishable is totally bizarre when they themselves can’t make the distinction.
I don’t think this is what I’m claiming. I’m not sure what you mean by the “subjective situations” of the human and the Boltzmann brain, but I don’t think I’m claiming that the subjective situations themselves are evidentially distinguishable. I don’t think I can point to some aspect of my phenomenal experience that proves I’m not a Boltzmann brain.
I’m claiming that the evidence available to me goes beyond my phenomenal experience, that one’s evidence isn’t fully determined by one’s “subjective situation”. Boltzmann brains and human beings have different evidence. Is this the view you regard as bizarre? It doesn’t seem all that different from the view expressed in the Sequences here.
Further, I think if two observers have vastly different sets of evidence, then it is permissible to place them in separate reference classes when reasoning about certain anthropic problems.
You should stop assuming that everyone is misunderstanding you.
I don’t assume this. I think some people are misunderstanding me. Others have expressed a position which I see as actually opposed to my position, so I’m pretty sure they have understood me. I think they’re wrong, though.
Your incorrect prediction about how I would respond to your question is an indication that you have at least partially misunderstood me. I suspect the number of people who have misunderstood me on this thread is explicable by a lack of clarity on my part.
Everyone is giving you the same criticism in different words and your argument is not being upvoted. Update on this information.
I have, but it does not shift my credence enough to convince me I’m wrong. Does the fact that a majority of philosophers express agreement with externalism about mental content lead you to update your position somewhat? If you are unconvinced that I am accurately representing externalism, I encourage you to read the SEP article I linked and make up your own mind.
Yes. I don’t think the Boltzmann brain has a representation of the external world at all.
Not “the” external world. “An external world.” And when I say “external world” I don’t mean a different, actually existing external world, but a symbolic system that purports to represent an external world, just as a human’s brain contains a symbolic system (or something like that) that actually represents the external world.
If they had ended up in that arrangement by chance they wouldn’t be representing anything. Why do you think neural arrangements are different?
The question here isn’t the relationship between the symbolic system/neural arrangement (call it S) and the system S purports to represent (call it R). The question is about the relation between S and the rest of the neural arrangement that produces phenomenal experience (call it P). If I understood exactly how that worked I would have solved the problem of consciousness. I have not done so. I’m okay with an externalism that simply says that, for S to count as an intentional state, it must have a causal connection to R. But a position that a subject’s phenomenal experience supervenes not just on P and S but also on R is much more radical than typical externalism and would, absent a lot of explanation, imply that physicalism is false.
You’re not a phenomenal externalist, correct?
I’m claiming that the evidence available to me goes beyond my phenomenal experience, that one’s evidence isn’t fully determined by one’s “subjective situation”. Boltzmann brains and human beings have different evidence. Is this the view you regard as bizarre? It doesn’t seem all that different from the view expressed in the Sequences here.
Ah, this might be the issue of contention. What makes phenomenal experience evidence of anything is that we have good reason to think it is causally entangled in an external world. But a Boltzmann brain would have exactly the same reasons. That is, an external, persistent, physical world is the best explanation of our sensory experiences, and so we take our sensory experiences to tell us things about that world, which lets us predict and manipulate it. But the Boltzmann brain has exactly the same sensory experiences (and memories). It will make the same predictions (regarding future sensory data) and run the same experiments (except the Boltzmann brain’s will be ‘imaginary’ in a sense), which will return the same results (in terms of sensory experiences).
I don’t really want this to be about the definition of evidence. But surely having different sets of evidence implies that a perfect Bayesian reasoner wouldn’t return the same updates and credences for both sets!
No, I’m not, you’ll be glad to hear. There are limits even to my lunacy. I was just objecting to your characterization of the BB’s brain state as a representation. I’m not even all that happy with calling it a purported representation. If the salt and pepper shaker arrangement occurs by chance, does that make it a purported representation without actual representational content? Who’s doing the purporting? Is it sufficient that some system could be used as a representation for it to count as a purported representation? In that case, everything is a purported representation.
I think there’s a tendency to assume our mental representations somehow have intrinsic representational properties that we wouldn’t attribute to other external representations. This is probably because phenomenal representation seems so immediate. If a Boltzmann brain’s visual system were in the same state mine is in when I see my mother, then maybe the brain isn’t visually representing my mother, but surely it is representing a woman, or at least something. Well, no, I don’t think so. If a physical system that is atom-for-atom identical to a photograph of my mother congealed out of a high-entropy soup, it would not be a representation of my mother. It wouldn’t be a representation at all, and not even a purported one.
But surely having different sets of evidence implies that a perfect Bayesian reasoner wouldn’t return the same updates and credences for both sets!
First, the Boltzmann brain and I do not return the same updates. The only thing identical about our updates is their syntactical instantiation. Their semantics differ. In fact, I wouldn’t even say the Boltzmann brain is performing Bayesian reasoning. Being able to find an isomorphism between a physical process some system is undergoing and a genuine computational process does not mean that the system is actually performing the computation. Accepting that would entail that every physical system is performing every computation.
Second, I disagree with your claim that perfect Bayesian reasoners would return different updates for different sets of evidence. I see no reason to believe this is true. As long as the likelihood ratios (and priors) are the same, the updates will be the same, but likelihood ratios aren’t unique to particular pieces of evidence. As an example, suppose a hypothesis H predicts a 30% chance of observing a piece of evidence E1, and the chance of observing that evidence if H had been false is 10%. It seems to me entirely possible that there is a totally different piece of evidence, E2, which H also predicts has a 30% chance of being observed, and ~H predicts has a 10% chance of being observed. A Bayesian reasoner who updated on E1 would return the same credence as one who updated on E2, even though E1 and E2 are different. None of this seems particularly controversial. Am I misunderstanding your claim?
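Here is a minimal numerical sketch of that point (the 30%/10% likelihoods are the ones from the example above; the 50% prior is an assumption I’m adding purely for illustration): two distinct pieces of evidence with the same likelihoods produce the same posterior credence.

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    joint_h = p_e_given_h * prior_h
    joint_not_h = p_e_given_not_h * (1 - prior_h)
    return joint_h / (joint_h + joint_not_h)

prior = 0.5  # illustrative prior for H; not specified in the comment above

# E1 and E2 are different pieces of evidence, but H gives each a 30% chance
# and ~H gives each a 10% chance, so the likelihood ratios are identical.
credence_after_e1 = posterior(prior, 0.30, 0.10)
credence_after_e2 = posterior(prior, 0.30, 0.10)

print(credence_after_e1, credence_after_e2)  # 0.75 0.75: same update from different evidence
```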
Yes, but that’s my fault. Let’s put it this way. A set of evidence is indistinguishable from another set of evidence if and only if an ideal Bayesian reasoner can update on either and then not update at all after learning the other set.
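To make that criterion a bit more explicit (the notation is mine, not part of the comment above), for evidence sets E1 and E2 and every hypothesis H:

P(H | E1) = P(H | E1, E2) and P(H | E2) = P(H | E2, E1)

That is, once an ideal reasoner has conditioned on either set, learning the other set changes none of its credences.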
First, the Boltzmann brain and I do not return the same updates.
That’s not the issue. Neither you nor your Boltzmann brain copy is an ideal Bayesian reasoner. The question is: what happens when you feed your evidence to an ideal Bayesian reasoner and then feed it the Boltzmann brain’s evidence? Will the ideal Bayesian reasoner find anything new to update on? What if you reverse the process and feed the Boltzmann brain’s evidence first? Will the ideal Bayesian reasoner update then?
The only thing identical about our updates is their syntactical instantiation. Their semantics differ. In fact, I wouldn’t even say the Boltzmann brain is performing Bayesian reasoning. Being able to find an isomorphism between a physical process some system is undergoing and a genuine computational process does not mean that the system is actually performing the computation. Accepting that would entail that every physical system is performing every computation.
The pancomputation issue is tricky but has nothing to do with this. By stipulation, Boltzmann brains are physically similar enough to humans to make computations that produce exactly similar brain states. Moreover, you say you are not a phenomenal externalist, so the computations made by Boltzmann brains apparently produce exactly similar phenomenal experiences. Pancomputation isn’t any more of a problem for me than it is for you.
Perhaps this is just going to end up being a reductio on externalism.
Who’s doing the purporting?
The Boltzmann brain, obviously. Are you denying that a Boltzmann brain can have any intentional states? I.e., can it believe things about its phenomenal experience, qualia, or other mental states? Can’t it believe it believes something?
If the salt and pepper shaker arrangement occurs by chance, does that make it a purported representation without actual representational content? Who’s doing the purporting?
Well, the simpler part of this is that representation is a three-place predicate: system A represents system B to observer C1, which does not imply that A represents B to C2, nor does it prevent A from representing B2 to C2. (Nor, indeed, to C1.)
So, yes, a random salt-and-pepper-shaker arrangement might represent any number of things to any number of observers.
A purported representation is presumably some system A about which the claim is made (by anyone capable of making claims) that there exists a (B, C) pair such that A represents B to C.
But there’s a deeper disconnect here having to do with what it means for A to represent B to C in the first place, which we’ve discussed elsethread.
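If it helps, here is a toy sketch of the three-place-predicate point in code (all names here are hypothetical illustrations, not anything from the discussion above): representation is modelled as a relation over (system, target, observer) triples, so nothing follows from one triple about any other.

```python
# Toy model (hypothetical names): representation as a set of
# (system, target, observer) triples, not a property of the system alone.
represents = {
    ("salt_shaker_arrangement", "car_collision", "my_friend"),    # A represents B to C1
    ("salt_shaker_arrangement", "chess_position", "a_passerby"),  # same A, different B and C
}

def is_representation_to(system, target, observer):
    """True iff `system` represents `target` to `observer`."""
    return (system, target, observer) in represents

print(is_representation_to("salt_shaker_arrangement", "car_collision", "my_friend"))   # True
# Nothing follows about other (target, observer) pairs:
print(is_representation_to("salt_shaker_arrangement", "car_collision", "a_passerby"))  # False
```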
Being able to find an isomorphism between a physical process some system is undergoing and a genuine computational process does not mean that the system is actually performing the computation. Accepting that would entail that every physical system is performing every computation.
Sure. And if I had a brain that could in fact treat all theoretically possible isomorphisms as salient at one time, I would indeed treat every physical system as performing every computation, and also as representing every other physical system. In fact, though, I lack such a brain; what my brain actually does is treat a vanishingly small fraction of theoretically possible isomorphisms as salient, and I am therefore restricted to only treating certain systems as performing certain computations and as representing certain other systems.