But the move from subjective indistinguishability to evidential indistinguishability seems to ignore an important point: meanings ain’t just in the head.
Whether or not a Boltzmann brain could successfully refer to Barack Obama doesn’t change the fact that your Boltzmann brain copy doesn’t know it can’t have beliefs about Barack Obama. It’s a scenario of radical skepticism. We can deny that Boltzmann brains have knowledge, but they don’t know any better.
your Boltzmann brain copy doesn’t know it can’t have beliefs about Barack Obama
Sure, but I do. I have beliefs about Obama, and I know I can have such beliefs. Surely we’re not radical skeptics to the point of denying that I possess this knowledge. And that’s my point: I know things my Boltzmann brain copy can’t, so we’re evidentially distinguishable.
Surely we’re not radical skeptics to the point of denying that I possess this knowledge.
Of course we are. That’s the big scary implication of the Boltzmann brain scenario. If you know a priori that you can’t be a Boltzmann brain then it is easy to exclude them from your reference class. Your entire case is just an argument from incredulity, dressed up.
No, that is not the big scary implication. At least not the one physicists are interested in. The Boltzmann brain problem is not just a dressed up version of Descartes’ evil demon problem. Look, I think there’s a certain kind of skepticism that can’t be refuted because the standards of evidence it demands are unrealistically high. This form of skepticism can be couched in terms of an evil demon, or the Matrix, or Boltzmann brains. However you do it, I think it’s a silly problem. If that was the problem posed by Boltzmann brains, I’d be unconcerned.
The problem I’m interested in is not that the Boltzmann brain hypothesis raises the specter of skepticism; the problem is that it, in combination with the SSA, is claimed to be strong evidence against our cosmological models. This is an entirely different issue from radical skepticism. In fact, this problem explicitly assumes the falsehood of radical skepticism. The hypothesis is supposed to disconfirm cosmological models precisely because we know we’re not Boltzmann brains.
This is an entirely different issue from radical skepticism. In fact, this problem explicitly assumes the falsehood of radical skepticism. The hypothesis is supposed to disconfirm cosmological models precisely because we know we’re not Boltzmann brains.
Right. The argument is a modus tollens on the premise that we could possibly be Boltzmann brains. It’s: a) we are not Boltzmann brains, b) the SSA, c) a cosmological model that predicts a high preponderance of Boltzmann brains. PICK ONLY TWO. Now it’s entirely reasonable to reject the notion that we are Boltzmann brains on pragmatic grounds. It’s something we might as well assume because there is little point to anything if we don’t. But you can’t dissolve the fact that the SSA and the cosmological model imply that we are Boltzmann brains by relying on our pragmatic insistence that we aren’t (which is what you’re doing with the externalism stuff).
My externalism stuff is just intended to establish that Boltzmann brains and actual humans embedded in stable macroscopic worlds have different evidence available to them. At this point, I need make no claim about which of these is me. So I don’t think the anti-skeptical assumption plays a role here. My claim at this point is just that these two systems are in different epistemic situations (they have different beliefs, knowledge, evidence).
The rejection of skepticism is a separate assumption. As you say, there’s good pragmatic reason to reject skepticism. I’m not sure what you mean by “pragmatic reason”, but if you mean something like “We don’t actually know skepticism is false, but we have to operate under the assumption that it is” then I disagree. We do actually know there is an external world. To claim that we do not is to raise the standard of evidence to an artificially high level. Consistent sensory experience of an object in a variety of circumstances is ordinarily sufficient to claim that we know the object exists (despite the possibility that we may be in the Matrix).
So now we have two premises, both arrived at through different and independent chains of reasoning. The first is that subjective indistinguishability does not entail evidential indistinguishability. The second is that I am not a Boltzmann brain. The combination of these two premises leads to my conclusion, that one might be justified in excluding Boltzmann brains from one’s reference class. Now, a skeptic would attack the second premise. Fair enough, I guess. But realize that is a different premise from the first one. If your objection is skepticism, this objection has nothing to do with semantic externalism. And I think skepticism is a bad (and somewhat pointless) objection.
My claim at this point is just that these two systems are in different epistemic situations (they have different beliefs, knowledge, evidence).
That’s fine. But what matters is that they can’t actually tell they are in different epistemic situations. You’ve identified an objective distinction between Boltzmann brains and causally-embedded people. That difference is essentially: for the latter the external world exists, for the former it does not. But you haven’t provided any way for a Boltzmann brain or a regular old-fashioned human being to infer anything different about the external world. You’re confusing yourself with word games. A Boltzmann brain and a human being might be evidentially distinguishable in that the former’s intentional states don’t actually refer to anything. But their subjective situations are evidentially indistinguishable. Taboo ‘beliefs’ and ‘knowledge’. Their information states are identical. They will come to identical conclusions about everything. The Boltzmann brain copy of pragmatist is just as confident that he is not a Boltzmann brain as you are.
I disagree. We do actually know there is an external world. To claim that we do not is to raise the standard of evidence to an artificially high level. Consistent sensory experience of an object in a variety of circumstances is ordinarily sufficient to claim that we know the object exists (despite the possibility that we may be in the Matrix).
This statement is only true if you reject either the SSA or a cosmological model that predicts most things that are thinking the same thoughts I am are Boltzmann brains. Which is, like, the whole point of the argument and why it’s not actually a separate assumption. The Boltzmann brain idea, like the Simulation argument, is much stronger than typical Cartesian skepticism, and they are in no way identical arguments. These arguments say that most of the things with your subjective experiences are Boltzmann brains/in a computer simulation. That’s very different from saying that there is a possibility an evil demon is tricking you. And the argument you give above for knowing that there is an external world is sufficient to rebut traditional, Cartesian skepticism, but it is not sufficient to rebut the Boltzmann brain idea or the Simulation argument. These are more potent skepticisms.
Look at it this way: You have two premises that point to you being a Boltzmann brain. Your reply is that the SSA doesn’t actually suggest you are a Boltzmann brain because your intentional states have referents and the Boltzmann brain’s do not. That’s exactly what the Boltzmann brain copy of you is thinking. Meanwhile the cosmological model you’re working under says that just about everything thinking that thought is wrong.
They [Boltzmann brains and human beings] will come to identical conclusions about everything.
Your argument against my view seems to presume that my view is false. I deny that they will come to identical conclusions about everything. When I reason, I come to conclusions about things in my environment. For example, I came to the conclusion that Obama was born in Hawaii, based on evidence about Obama that was available to me. The Boltzmann brain cannot even refer to Obama, so it cannot come to this conclusion.
The Boltzmann brain copy of pragmatist is just as confident that he is not a Boltzmann brain as you are.
No. The Boltzmann brain copy of pragmatist doesn’t have any beliefs about Boltzmann brains (or brains in general) to be confident about. I know you disagree, but again, that disagreement is what’s at issue here. Restating the disagreement in different ways isn’t really an argument against my position.
This statement is only true if you reject either the SSA or a cosmological model that predicts most things that are thinking the same thoughts I am are Boltzmann brains.
The cosmological model doesn’t predict that there are many Boltzmann brains thinking the same thoughts as me. It predicts that there are many Boltzmann brains in the same brain state as me. Whether the SSA says that I am likely to be one of the Boltzmann brains depends on what the appropriate reference class is. There is good reason to think that the appropriate reference class includes all observers with sufficiently similar evidence to mine. I don’t disagree with that version of the SSA. So far, no conflict between SSA + cosmology and the epistemology I’ve described.
What I disagree with is the claim that all subjectively similar observers have to be in the same reference class. The only motivation I can see for this is that subjective similarity entails evidential similarity. But I think there are strong arguments against this. These arguments do not assume anything about whether or not I am a Boltzmann brain. So I don’t see why the arguments I give have to be strong enough to rebut the idea that I’m a Boltzmann brain. That’s not what I’m trying to do. Maybe this comment gives a better idea of how I see the argument I’m responding to, and the nature of my response.
The claim is merely that they will produce identical subsequent brain states and identical nerve impulses.
I agree with this claim, but I don’t see how it can be leveraged into the kind of objection I was responding to. Why should the fact that Boltzmann brains could go through an identical neural process convince me that the reasoning instantiated by me going through the neural process is wrong?
Why should the fact that Boltzmann brains could go through an identical neural process convince me that the reasoning instantiated by me going through the neural process is wrong?
The conclusions of this reasoning, when it’s performed by you, are not wrong, but they are wrong when the same reasoning is performed by a Boltzmann brain. In this sense, the process of reasoning is invalid: it doesn’t produce correct conclusions in all circumstances, and that makes it somewhat unsatisfactory, but of course it works well for the class of instantiations that doesn’t include Boltzmann brains.
As a less loaded model of some of the aspects of the problem, consider two atom-by-atom identical copies of a person who are given identical-looking closed boxes, with one box containing a red glove and the other a green glove. If the green-glove copy for some reason decides that the box it’s seeing contains a green glove, then that copy is right. At the same time, if the green-glove copy so decides, then since the copies are identical, the red-glove copy will also decide that its box contains a green glove, and it will be wrong. Since evidence about the content of the boxes is not available to the copies, deciding either way is in some sense incorrect reasoning, even if it happens to produce a correct belief in one of the reasoners, at the cost of producing an incorrect belief in the other.
OK, that’s a good example. Let’s say the green-glove copy comes to the conclusion that its glove is green because of photons bouncing off the glove and interacting with its cones, which sends certain signals to the optic nerve and so on. In the case of the red-glove copy, a thermodynamic fluctuation occurs that leads it to go through the exact same physical process. That is, the fluctuation makes the cones react just as if they had interacted with green photons, and the downstream process is exactly the same. In this case, you’d want to say both duplicates have unjustified beliefs? The green-glove duplicate arrived at its belief through a reliable process; the red-glove duplicate didn’t. I just don’t see why our conclusion about the justification has to be the same across both copies. Even if I bought this constraint, I’d want to say that both of their beliefs are in fact justified. The red-glove one’s belief is false, but false beliefs can be justified. The red-glove copy just got really unlucky.
Let’s say the green-glove copy comes to the conclusion that its glove is green because of photons bouncing off the glove and interacting with its cones
In my example, the gloves are not observed and the boxes are closed; the states of the brains of both copies, the nerve impulses they generate, and the words they say will all be, by construction, identical during the thought experiment.
(See also the edit to the grandparent comment, it could be the case that we already agree.)
Whoops, missed that bit. Of course, if either copy is forming a judgment about the glove’s color without actual empirical contact with the glove, then its belief is unjustified. I don’t think the identity of the copies is relevant to our judgment in this case. What would you say about the example I gave, where the box is open and the green-glove copy actually sees the glove? By hypothesis, the brains of both copies remain physically identical throughout the process. In this case, do you think we should judge that there is something problematic about the green-glove copy’s judgment that the glove is green? This case seems far more analogous to a situation involving a human and a Boltzmann brain.
ETA: OK, I just saw the edit. We’re closer to agreement than I thought, but I still don’t get the “unsatisfactory” part. In the example I gave, I don’t think there’s anything unsatisfactory about the green-glove copy’s belief formation mechanism. It’s a paradigm example of forming a belief through a reliable process.
The sense in which your (correct) belief that you are not a Boltzmann brain is justified (or unjustified) seems to me analogous to the situation with the green-glove copy believing that its unobserved glove is green. Justification is a tricky thing: actually not being a Boltzmann brain, or actually being the green-glove copy could in some sense be said to justify the respective beliefs, without a need to rely on distinguishing evidence, but it’s not entirely clear to me how that works.
Your argument against my view seems to presume that my view is false. I deny that they will come to identical conclusions about everything. When I reason, I come to conclusions about things in my environment. For example, I came to the conclusion that Obama was born in Hawaii, based on evidence about Obama that was available to me. The Boltzmann brain cannot even refer to Obama, so it cannot come to this conclusion.
Do you deny that the Boltzmann brain thinks it can refer to Obama? I.e. that it has some mental representation of an external world that is indistinguishable from your own except insofar as it does not successfully refer to anything in an external world?
If your answer is “Yes, I deny it,” then I don’t think you understand what it means to have identical brain states, or your view presumes metaphysically spooky features that you haven’t unpacked. But what I understand your position to be is that you don’t deny it, but that you think the Boltzmann brain can’t come to identical conclusions about the world because its representation of the external world doesn’t successfully refer to anything.
If you want to say that a belief must have a causal connection to the thing it is trying to refer to: fine. We can call what Boltzmann brains have “pseudo-beliefs”. Now, how can you tell if you have beliefs or pseudo-beliefs? You can’t. Claiming that the subjective situations of the human and the Boltzmann brain are evidentially distinguishable is totally bizarre when they themselves can’t make the distinction.
The reason people are focusing on the viability of the skeptical scenario in their responses to you is that it looks like the reason you think this is a viable evidential distinction is that you are unreasonably confident that your mental states successfully refer to an external world. Moreover, a solution to the argument that doesn’t reject the SSA or the cosmological model shouldn’t just play with the meaning of words—readers should react with a sense of “Oh, good. The external world exists after all.” If they don’t it’s a good indication that you haven’t really addressed the problem. This is true even though the argument starts by assuming that we are not Boltzmann brains since obviously the logical structure remains intact.
You should stop assuming that everyone is misunderstanding you. Everyone is giving you the same criticism in different words and your argument is not being upvoted. Update on this information.
Do you deny that the Boltzmann brain thinks it can refer to Obama? I.e. that it has some mental representation of an external world that is indistinguishable from your own except insofar as it does not successfully refer to anything in an external world?
Yes. I don’t think the Boltzmann brain has a representation of the external world at all. Whether or not a system state is a representation of something else is not an intrinsic property of the state. It depends on how the state was produced and how it is used. If you disagree, could you articulate what it is you think makes a state representational?
I could use a salt and pepper shaker to represent cars when I’m at a restaurant telling my friend about a collision I recently experienced. Surely you’d agree that the particular arrangement of those shakers I constructed is not intrinsically representational. If they had ended up in that arrangement by chance they wouldn’t be representing anything. Why do you think neural arrangements are different?
Claiming that the subjective situations of the human and the Boltzmann brain are evidentially distinguishable is totally bizarre when they themselves can’t make the distinction.
I don’t think this is what I’m claiming. I’m not sure what you mean by the “subjective situations” of the human and the Boltzmann brain, but I don’t think I’m claiming that the subjective situations themselves are evidentially distinguishable. I don’t think I can point to some aspect of my phenomenal experience that proves I’m not a Boltzmann brain.
I’m claiming that the evidence available to me goes beyond my phenomenal experience, that one’s evidence isn’t fully determined by one’s “subjective situation”. Boltzmann brains and human beings have different evidence. Is this the view you regard as bizarre? It doesn’t seem all that different from the view expressed in the Sequences here.
Further, I think if two observers have vastly different sets of evidence, then it is permissible to place them in separate reference classes when reasoning about certain anthropic problems.
You should stop assuming that everyone is misunderstanding you.
I don’t assume this. I think some people are misunderstanding me. Others have expressed a position which I see as actually opposed to my position, so I’m pretty sure they have understood me. I think they’re wrong, though.
Your incorrect prediction about how I would respond to your question is an indication that you have at least partially misunderstood me. I suspect the number of people who have misunderstood me on this thread is explicable by a lack of clarity on my part.
Everyone is giving you the same criticism in different words and your argument is not being upvoted. Update on this information.
I have, but it does not shift my credence enough to convince me I’m wrong. Does the fact that a majority of philosophers express agreement with externalism about mental content lead you to update your position somewhat? If you are unconvinced that I am accurately representing externalism, I encourage you to read the SEP article I linked and make up your own mind.
Yes. I don’t think the Boltzmann brain has a representation of the external world at all.
Not “the” external world, but “an” external world. And when I say “external world” here I don’t mean a different, actually existing external world, but a symbolic system that purports to represent an external world, just as a human’s brain contains a symbolic system (or something like that) that actually represents the external world.
If they had ended up in that arrangement by chance they wouldn’t be representing anything. Why do you think neural arrangements are different?
The question here isn’t the relationship between the symbolic system/neural arrangement (call it S) and the system S purports to represent (call it R). The question is about the relation between S and the rest of the neural arrangement that produces phenomenal experience (call it P). If I understood exactly how that worked I would have solved the problem of consciousness. I have not done so. I’m okay with an externalism that simply says for S to count as an intentional state it must have a causal connection to R. But a position that a subject’s phenomenal experience supervenes not just on P and S but also on R is much more radical than typical externalism and would, absent a lot of explanation, imply that physicalism is false.
You’re not a phenomenal externalist, correct?
I’m claiming that the evidence available to me goes beyond my phenomenal experience, that one’s evidence isn’t fully determined by one’s “subjective situation”. Boltzmann brains and human beings have different evidence. Is this the view you regard as bizarre? It doesn’t seem all that different from the view expressed in the Sequences here.
Ah, this might be the issue of contention. What makes phenomenal experience evidence of anything is that we have good reason to think it is causally entangled in an external world. But a Boltzmann brain would have exactly the same reasons. That is, an external, persistent, physical world is the best explanation of our sensory experiences, and so we take our sensory experiences to tell us things about that world which lets us predict and manipulate it. But the Boltzmann brain has exactly the same sensory experiences (and memories). It will make the same predictions (regarding future sensory data) and run the same experiments (except the Boltzmann brain’s will be ‘imaginary’ in a sense) which will return the same results (in terms of sensory experiences).
I don’t really want this to be about the definition of evidence. But surely having different sets of evidence implies that a perfect Bayesian reasoner wouldn’t return the same updates and credences for both sets!
No, I’m not, you’ll be glad to hear. There are limits even to my lunacy. I was just objecting to your characterization of the BB’s brain state as a representation. I’m not even all that happy with calling it a purported representation. If the salt and pepper shaker arrangement occurs by chance, does that make it a purported representation without actual representational content? Who’s doing the purporting? Is it sufficient that some system could be used as a representation for it to count as a purported representation? In that case, everything is a purported representation.
I think there’s a tendency to assume our mental representations somehow have intrinsic representational properties that we wouldn’t attribute to other external representations. This is probably because phenomenal representation seems so immediate. If a Boltzmann brain’s visual system were in the same state mine is in when I see my mother, then maybe the brain isn’t visually representing my mother, but surely it is representing a woman, or at least something. Well, no, I don’t think so. If a physical system that is atom-for-atom identical to a photograph of my mother congealed out of a high entropy soup it would not be a representation of my mother. It wouldn’t be a representation at all, and not even a purported one.
But surely having different sets of evidence implies that a perfect Bayesian reasoner wouldn’t return the same updates and credences for both sets!
First, the Boltzmann brain and I do not return the same updates. The only thing identical about our updates is their syntactical instantiation. Their semantics differ. In fact, I wouldn’t even say the Boltzmann brain is performing Bayesian reasoning. Being able to find an isomorphism between a physical process some system is undergoing and a genuine computational process does not mean that the system is actually performing the computation. Accepting that would entail that every physical system is performing every computation.
Second, I disagree with your claim that perfect Bayesian reasoners would return different updates for different sets of evidence. I see no reason to believe this is true. As long as the likelihood ratios (and priors) are the same, the updates will be the same, but likelihood ratios aren’t unique to particular pieces of evidence. As an example, suppose a hypothesis H predicts a 30% chance of observing a piece of evidence E1, and the chance of observing that evidence if H had been false is 10%. It seems to me entirely possible that there is a totally different piece of evidence, E2, which H also predicts has a 30% chance of being observed, and ~H predicts has a 10% chance of being observed. A Bayesian reasoner who updated on E1 would return the same credence as one who updated on E2, even though E1 and E2 are different. None of this seems particularly controversial. Am I misunderstanding your claim?
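To make the arithmetic explicit, here is a minimal sketch in Python (my own illustration; the 0.5 prior is an arbitrary choice, and the likelihoods are just the ones from the example above):

```python
# Bayes' theorem: two distinct pieces of evidence with identical likelihoods
# under H and ~H produce identical posteriors.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' theorem."""
    numerator = p_e_given_h * prior_h
    return numerator / (numerator + p_e_given_not_h * (1 - prior_h))

prior = 0.5  # arbitrary prior on H, for illustration only

post_e1 = posterior(prior, 0.30, 0.10)  # E1: P(E1|H) = 0.3, P(E1|~H) = 0.1
post_e2 = posterior(prior, 0.30, 0.10)  # E2: a different observation, same likelihoods

print(post_e1, post_e2)  # both ~0.75: the same update from different evidence
```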
Yes, but that’s my fault. Let’s put it this way. A set of evidence is indistinguishable from another set of evidence if and only if an ideal Bayesian reasoner can update on either and then update not at all after learning the other set.
First, the Boltzmann brain and I do not return the same updates.
That’s not the issue. Neither you nor your Boltzmann brain copy is an ideal Bayesian reasoner. The question is: what happens when you feed your evidence to an ideal Bayesian reasoner and then feed it the Boltzmann brain’s evidence? Will the ideal Bayesian reasoner find anything new to update on? What if you reverse the process and feed the Boltzmann brain’s evidence first? Will the ideal Bayesian reasoner update then?
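As a crude sketch of that test (my own toy model; representing each piece of evidence as a named likelihood ratio is an assumption made purely for illustration):

```python
# Toy version of the indistinguishability test: two evidence sets are
# indistinguishable (in this sense) iff, after updating on either one,
# learning the other produces no further update, in either order.

def update(odds, evidence, seen):
    """Multiply in the likelihood ratio of each not-yet-seen piece of evidence."""
    for name, likelihood_ratio in evidence:
        if name not in seen:
            odds *= likelihood_ratio
            seen.add(name)
    return odds

def indistinguishable(set_a, set_b, prior_odds=1.0):
    def no_further_update(first, second):
        seen = set()
        after_first = update(prior_odds, first, seen)
        return update(after_first, second, seen) == after_first
    return no_further_update(set_a, set_b) and no_further_update(set_b, set_a)

# Example: identical evidence sets are trivially indistinguishable.
mine = [("memory_of_seeing_obama_on_tv", 3.0)]
bb_copy = [("memory_of_seeing_obama_on_tv", 3.0)]
print(indistinguishable(mine, bb_copy))  # True
```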
The only thing identical about our updates is their syntactical instantiation. Their semantics differ. In fact, I wouldn’t even say the Boltzmann brain is performing Bayesian reasoning. Being able to find an isomorphism between a physical process some system is undergoing and a genuine computational process does not mean that the system is actually performing the computation. Accepting that would entail that every physical system is performing every computation.
The pancomputation issue is tricky but has nothing to do with this. By stipulation Boltzmann brains are physically similar enough to humans to make computations that produce exactly similar brain states. Moreover, you say you are not a phenomenal externalist so the computations made by Boltzmann brains apparently produce exactly similar phenomenal experiences. Pancomputation isn’t any more of a problem for me than it is for you.
Perhaps this is just going to end up being a reductio on externalism.
Who’s doing the purporting?
The Boltzmann brain, obviously. Are you denying that a Boltzmann brain can have any intentional states? I.e., can it believe things about its phenomenal experience, qualia, or other mental states? Can’t it believe it believes something?
If the salt and pepper shaker arrangement occurs by chance, does that make it a purported representation without actual representational content? Who’s doing the purporting?
Well, the simpler part of this is that representation is a three-place predicate: system A represents system B to observer C1, which does not imply that A represents B to C2, nor does it prevent A from representing B2 to C2. (Nor, indeed, to C1.)
So, yes, a random salt-and-pepper-shaker arrangement might represent any number of things to any number of observers.
A purported representation is presumably some system A about which the claim is made (by anyone capable of making claims) that there exists a (B, C) pair such that A represents B to C.
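To make the three-place reading concrete, here is a toy sketch (my own illustration, with made-up names): representation modeled as a relation over (system, target, observer) triples rather than as a property of the system alone.

```python
# "A represents B to C" as membership in a set of (A, B, C) triples.
# The same system can represent different things to different observers,
# and a system no observer treats as standing for anything appears in no triple.

representations = {
    ("salt_and_pepper_shakers", "two_colliding_cars", "my_dinner_companion"),
    ("salt_and_pepper_shakers", "a_chess_endgame", "a_passing_stranger"),
}

def represents(system, target, observer):
    return (system, target, observer) in representations

print(represents("salt_and_pepper_shakers", "two_colliding_cars", "my_dinner_companion"))  # True
print(represents("salt_and_pepper_shakers", "two_colliding_cars", "a_passing_stranger"))   # False
print(represents("chance_shaker_arrangement", "two_colliding_cars", "anyone"))             # False
```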
But there’s a deeper disconnect here having to do with what it means for A to represent B to C in the first place, which we’ve discussed elsethread.
Being able to find an isomorphism between a physical process some system is undergoing and a genuine computational process does not mean that the system is actually performing the computation. Accepting that would entail that every physical system is performing every computation.
Sure. And if I had a brain that could in fact treat all theoretically possible isomorphisms as salient at one time, I would indeed treat every physical system as performing every computation, and also as representing every other physical system. In fact, though, I lack such a brain; what my brain actually does is treat a vanishingly small fraction of theoretically possible isomorphisms as salient, and I am therefore restricted to only treating certain systems as performing certain computations and as representing certain other systems.
I have beliefs about Obama, and I know I can have such beliefs. Surely we’re not radical skeptics to the point of denying that I possess this knowledge.
You know about your concept of Obama. You have memories of sensations which seem to validate parts of this concept. But you do not know that your world contains an object matching the concept.
You don’t think I know that Obama exists (out there, in the world, not in my head)? It sounds like you’re using the word “knowledge” very differently from the way it’s ordinarily used. According to you, can we know anything about the world outside our heads?
It sounds like you’re using the word “knowledge” very differently from the way it’s ordinarily used.
Well, sure, people say they “know” all sorts of things that they don’t actually know. It would be formidably difficult to speak and write in a way that constantly acknowledges the layered uncertainty actually present in the situation. Celia Green says the uncertainty is total, which isn’t literally true, but it’s close to the truth.
Experience consists of an ongoing collision between belief and reality, and reality is that I don’t know what will happen even one second from now, I don’t know the true causes of my sensations, and so on—though I may have beliefs about these matters. My knowledge is a small island in an ocean of pragmatic belief, and mostly concerns transient superficial sensory facts, matters known by definition and deduction, perhaps some especially vivid memories tying together sensation and concept, and a very slowly growing core of ontological facts obtained by phenomenological reflection, such as the existence of time, thought, sensation, etc. Procedural knowledge also deserves a separate mention, though it is essentially a matter of knowing how to try to do something; success is not assured.
Surely we’re not radical skeptics to the point of denying that I possess this knowledge.
We’re radical skeptics to the point of seriously considering that you may be a Boltzmann brain. And that’s what you think that would mean. (I also disagree with the way you use certain words, but others have said that.)
We’re radical skeptics to the point of seriously considering that you may be a Boltzmann brain.
Maybe you are, but I’m not. Nothing in my argument (or the Boltzmann brain argument in general) requires one to seriously entertain the possibility that one is a Boltzmann brain.
Well, that’s what you wanted to convince people of. And your argument in the OP is wrong, and others have explained why (incorrect word usage and word games).
There are other, better arguments: for example I’m as simple an observer as I might have been, which argues strongly that I’m not a chance fluctuation. It’s legitimate to take such arguments as evidence against big-universe theories where BBs flourish. But if other data suggests a big universe, then there’s still an unanswered question to be resolved.
And your argument in the OP is wrong, and others have explained why (incorrect word usage and word games).
Just to clarify: Is the word “belief” one of the words you think I’m using incorrectly? And do you think that the incorrect usage is responsible for me saying things like “Boltzmann brains couldn’t have beliefs about Obama”? Relatedly, do you think Boltzmann brains could in fact have beliefs about Obama?
I’m not trying to get into an argument about this here. I just want a sense of what people think about this. I might defend my claims about belief in a separate post later.
I can’t speak for Dan, of course, but for my own part: I think this whole discussion has gotten muddled by failing to distinguish clearly enough between claims about the world and claims about language.
I’m not exactly sure what you or Dan mean by “incorrect word usage” here so I can’t easily answer your first question, but I think the distinction you draw between beliefs and brain-states-that-could-be-beliefs-if-they-had-intentional-content-but-since-they-don’t-aren’t is not an important distinction, and using the label “belief” to describe the former but not the latter is not a lexical choice I endorse.
I think that lexical choice is responsible for you saying things like “Boltzmann brains couldn’t have beliefs about Obama.”
I think Boltzmann brains can enter brain-states which correspond to the brain-states that you would call “beliefs about Obama” were I to enter them, and I consider that correspondence strong enough that I see no justification to not also call the BB’s brain-states “beliefs about Obama.”
As far as I can tell, you and I agree about all of this except for what things in the world the word “belief” properly labels.
Do you feel the same way about the word “evidence”? Do you feel comfortable saying that an observer can have evidence regarding the state of some external system even if its brain state is not appropriately causally entangled with that system?
I obviously agree with you that how we use “belief” and “evidence” is a lexical choice. But I think it is a lexical choice with important consequences. Using these words in an internalist manner generally indicates (perhaps even encourages) a failure to recognize the importance of distinguishing between syntax and semantics, a failure I think has been responsible for a lot of confused philosophical thinking. But this is a subject for another post.
This gets difficult, because there’s a whole set of related terms I suspect we aren’t quite using the same way, so there’s a lot of underbrush that needs to get cleared to make clear communication possible.
When I’m trying to be precise, I talk about experiences providing evidence which constrains expectations of future experiences. That said, in practice I do also treat clusters of experience that demonstrate persistent patterns of correlation as evidence of the state of external systems, though I mostly think of that sort of talk as kinda sloppy shorthand for an otherwise too-tedious-to-talk-about set of predicted experiences.
So I feel reasonably comfortable saying that an experience E1 can serve as evidence of an external system S1. Even if I don’t actually believe that S1 exists, I’m still reasonably comfortable saying that E1 is evidence of S1. (E.g., being told that Santa Claus exists is evidence of the existence of Santa Claus, even if it turns out everyone is lying.)
If I have a whole cluster of experiences E1...En, all of which reinforce one another and reinforce my inference of S1, and I don’t have any experiences which serve as evidence that S1 doesn’t exist, I start to have compelling evidence of S1 and my confidence in S1 increases. All of this can occur even if it turns out that S1 doesn’t actually exist. And, of course, some other system S2 can exist without my having any inkling of it. This is all fairly unproblematic.
So, moving on to the condition you’re describing, where E1 causes me to infer the existence of S1, and S1 actually does exist, but S1 is not causally entangled with E1. I find it simpler to think about a similar condition where there exist two external systems, S1 and S2, such that S2 causes E1 and on the basis of E1 I infer the existence of S1, while remaining ignorant of S2. For example, I believe Alice is my birth mother, but in fact Alice (S1) and my birth mother (S2) are separate people. My birth mother sends me an anonymous email (E1) saying “I am your birth mother, and I have cancer.” I infer that Alice has cancer. It turns out that Alice does have cancer, but that this had no causal relationship with the email being sent.
I am comfortable in such an arrangement saying that E1 is evidence that S1 has cancer, even though E1 is not causally entangled with S1’s cancer.
Further, when discussing such an arrangement, I can say that the brain-states caused by E1 are about S1 or about S2 or about both or neither, and it’s not at all clear to me what if anything depends on which of those lexical choices I make. Mostly, I think asking what E1 is really “about” is a wrong question; if it is really about anything it’s about the entire conjoined state of the universe, including both S1 and S2 and everything else, but really who cares?
And if instead there is no S2, and E1 just spontaneously comes into existence, the situation is basically the same as the above, it’s just harder for me to come up with plausible examples.
Perhaps it would help to introduce a distinction here. Let’s distinguish internal evidence and external evidence. P1 counts as internal evidence for P2 if it is procedurally rational for me to alter my credence in P2 once I come to accept P1, given my background knowledge. P1 is external evidence for P2 if the truth of P1 genuinely counterfactually depends on the truth of P2. That is, P1 would be false (or less frequently true, if we’re dealing with statistical claims) if P2 were false. A proposition can be internal evidence without being external evidence. In your anonymous letter example, the letter is internal evidence but not external evidence.
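A rough symbolic gloss of the two definitions, in case it helps (my rendering, not pragmatist’s own formalism; note the external condition is counterfactual rather than merely statistical):

$$\text{internal evidence:}\quad P(P_2 \mid P_1, K) \neq P(P_2 \mid K)$$
$$\text{external evidence:}\quad P(P_1 \mid P_2) > P(P_1 \mid \neg P_2)\quad\text{(conditionals read counterfactually, not as mere correlation)}$$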
Which conception of evidence is the right one to use will probably depend on context. When we are attempting to describe an individual’s epistemic status—the amount of reliable information they possess about the world—then it seems that external evidence is the relevant variety of evidence to consider. And if two observers differ substantially in the external evidence available to them, it seems justifiable to place them in separate reference classes for certain anthropic explanations. Going back to an early example of Eliezer’s:
I’m going to close with the thought experiment that initially convinced me of the falsity of the Modesty Argument. In the beginning it seemed to me reasonable that if feelings of 99% certainty were associated with a 70% frequency of true statements, on average across the global population, then the state of 99% certainty was like a “pointer” to 70% probability. But at one point I thought: “What should an (AI) superintelligence say in the same situation? Should it treat its 99% probability estimates as 70% probability estimates because so many human beings make the same mistake?” In particular, it occurred to me that, on the day the first true superintelligence was born, it would be undeniably true that—across the whole of Earth’s history—the enormously vast majority of entities who had believed themselves superintelligent would be wrong. The majority of the referents of the pointer “I am a superintelligence” would be schizophrenics who believed they were God.
A superintelligence doesn’t just believe the bald statement that it is a superintelligence—it presumably possesses a very detailed, very accurate self-model of its own cognitive systems, tracks in detail its own calibration, and so on. But if you tell this to a mental patient, the mental patient can immediately respond: “Ah, but I too possess a very detailed, very accurate self-model!” The mental patient may even come to sincerely believe this, in the moment of the reply. Does that mean the superintelligence should wonder if it is a mental patient? This is the opposite extreme of Russell Wallace asking if a rock could have been you, since it doesn’t know if it’s you or the rock.
If the superintelligence were engaging in anthropic reasoning, should it put itself in the same reference class as the mental patients in all cases? If we think identical (or similar) internal evidence requires that they be in the same reference class, then I think the answer may be yes. But I think the answer is fairly obviously no, and this is because of the vast difference in the epistemic situations of the superintelligence and the mental patients, a difference attributable to differences in external evidence.
I accept your working definitions for “internal evidence” and “external evidence.”
When we are attempting to describe an individual’s epistemic status—the amount of reliable information they possess about the world—then it seems that external evidence is the relevant variety of evidence to consider
I want to be a little careful about the words “epistemic status” and “reliable information,” because a lot of confusion can be introduced through the use of terms that abstract.
I remember reading once that courtship behavior in robins is triggered by the visual stimulus of a patch of red taller than it is wide. I have no idea if this is actually true, but suppose it is. The idea was that the ancestral robin environment didn’t contain other stimuli like that other than female robins in estrus, so it was a reliable piece of evidence to use at the time. Now, of course, there are lots of visual stimuli in that category, so you get robins initiating courtship displays at red socks on clotheslines and at Coke cans.
So, OK. Given that, and using your terms, and assuming it makes any sense to describe what a robin does here as updating on evidence at all, then a vertical red swatch is always internal evidence of a fertile female, and it was external evidence a million years ago (when it “genuinely” counterfactually depended on the presence of such a female) but it is not now. If we put some robins in an environment from which we eliminate all other red things, it would be external evidence again. (Yes?)
If what I am interested in is whether a given robin is correct about whether it’s in the presence of a fertile female, external evidence is the relevant variety of information to consider.
If what I am interested in is what conclusions the robin will actually reach about whether it’s in the presence of a fertile female, internal evidence is the relevant variety of information to consider.
If that is consistent with your claim about the robin’s epistemic status and about the amount of reliable information the robin possesses about the world, then great, I’m with you so far. (If not, this is perhaps a good place to back up and see where we diverged.)
if two observers differ substantially in the external evidence available to them, it seems justifiable to place them in separate reference classes for certain anthropic explanations.
Sure, when available external evidence is particularly relevant to those anthropic explanations.
If the superintelligence were engaging in anthropic reasoning, should it put itself in the same reference class as the mental patients in all cases?
So A and B both believe they’re superintelligences. As it happens, A is in fact a SI, and B is in fact a mental patient. And the question is, should A consider itself in the same reference class as B. Yes?
...I think the answer is fairly obviously no, and this is because of the vast difference in the epistemic situations of the superintelligence and the mental patients,
Absolutely agreed. I don’t endorse any decision theory that results in A concluding that it’s more likely to be a mental patient than a SI in a typical situation like this, and this is precisely because of the nature of the information available to A in such a situation.
If we think identical (or similar) internal evidence requires that they be in the same reference class, then I think the answer may be yes.
Wait, what?
Why in the world would A and B have similar internal evidence?
I mean, in any normal environment, if A is a superintelligence and B is a mental patient, I would expect A to have loads of information on the basis of which it is procedurally rational for A to conclude that A is in a different reference class than B. Which is internal evidence, on your account. No?
But, OK. If I assume that A and B do have similar internal evidence… huh. Well, that implicitly assumes that A is in a pathologically twisted epistemic environment. I have trouble imagining such an environment, but the world is more complex than I can imagine. So, OK, sure, I can assume such an environment, in a suitably hand-waving sort of way.
And sure, I agree with you: in such an environment, A should consider itself in the same reference class as B. A is mistaken, of course, which is no surprise given that it’s in such an epistemically tainted environment.
Now, I suppose one might say something like “Sure, A is justified in doing so, but A should not do so, because A should not believe falsehoods.” Which would reveal a disconnect relating to the word “should,” in addition to everything else. (When I say that A should believe falsehoods in this situation, I mean I endorse the decision procedure that leads to doing so, not that I endorse the result.)
But we at least ought to agree, given your word usage, that it is procedurally rational for A to conclude that it’s in the same reference class as B in such a tainted environment, even though that isn’t true. Yes?
Yes, yes, and yes—in the usual meaning of “belief”. There are different-but-related meanings which are sometimes used, but the way you use it is completely unlike the usual meanings.
More importantly, you state that a BB can’t have “beliefs” in your sense, which is a (re)definition—that merely makes your words unclear and misunderstood—but then you conclude that because you have “beliefs” you are not a BB. This is simply wrong, even using your own definition of “belief”—because under your definition, having “real beliefs” is not a measurable fact of someone’s brain in reality, and so you can never make conclusions like “I have real beliefs” or “I am not a BB” based on your own brain state. (And all of our conclusions are based on our brain states.)
IOW: a BB similar to yourself, would reach the same conclusions as you—that it is not a BB—but it would be wrong. However, it would be reasoning from the exact same evidence as you. Therefore, your reasoning is faulty.
IOW: a BB similar to yourself, would reach the same conclusions as you—that it is not a BB—but it would be wrong. However, it would be reasoning from the exact same evidence as you.
I disagree that it would be reasoning from the exact same evidence as me. I’m an externalist about evidence too, not just about belief.
Again, you’re using the word “evidence” differently from everyone else. This only serves to confuse the discussion.
Tabooing “evidence”, what I was saying is that a BB would have the same initial brain-state (what I termed “evidence”) and therefore would achieve the same final brain-state (what I termed “conclusions”). The laws of physics for its brain-state evolution, and the physical causality between the two states, are the same as for your brain. This is trivially so by the very definition of a BB that is sufficiently similar to your brain.
I don’t know what you mean by “externalist evidence” and I don’t see how it would matter. The considerations that apply here are exactly the same as in Eliezer’s discussion of p-zombies. Imagine a BB which is a slightly larger fluctuation than a mere brain; it is a fluctuation of a whole body, which can live for a few seconds, and can speak and think in that time. It would think and say “I am conscious” for the same reasons as you do; therefore it is not a p-zombie. It would think and say “Barack Obama exists” for the same reasons as you do; therefore what everyone-but-you calls its knowledge and its beliefs about “Barack Obama”, are of the same kind as yours.
Wait, would an equivalent way to put it be “evidential” as in “as viewed by an outside observer”, as opposed to “from the inside” (the perspective of a Boltzmann brain)?
Whether or not a Boltzman brain could successfully refer to Barack Obama doesn’t change the fact that your Boltzman brain copy doesn’t know it can’t have beliefs about Barack Obama. It’s a scenario of radical skepticism. We can deny that Boltzman brains have knowledge but they don’t know any better.
Sure, but I do. I have beliefs about Obama, and I know I can have such beliefs. Surely we’re not radical skeptics to the point of denying that I possess this knowledge. And that’s my point: I know things my Boltzmann brain copy can’t, so we’re evidentially distinguishable.
Of course we are. That’s the big scary implication of the Boltzmann brain scenario. If you know a priori that you can’t be a Boltzman brain then it is easy to exclude them from your reference class. You’re entire case is just argument from incredulity, dressed up.
No, that is not the big scary implication. At least not the one physicists are interested in. The Boltzmann brain problem is not just a dressed up version of Descartes’ evil demon problem. Look, I think there’s a certain kind of skepticism that can’t be refuted because the standards of evidence it demands are unrealistically high. This form of skepticism can be couched in terms of an evil demon, or the Matrix, or Boltzmann brains. However you do it, I think it’s a silly problem. If that was the problem posed by Boltzmann brains, I’d be unconcerned.
The problem I’m interested in is not that the Boltzmann brain hypothesis raises the specter of skepticism; the problem is that it, in combination with the SSA, is claimed to be strong evidence against our cosmological models. This is an entirely different issue from radical skepticism. In fact, this problem explicitly assumes the falsehood of radical skepticism. The hypothesis is supposed to disconfirm cosmological models precisely because we know we’re not Boltzmann brains.
Right. The argument is a modus tollens on the premise that we could possibly be Boltzmann brains. It’s, a) we are not Boltzmann brains, b) SSA, c) cosmological model that predicts a high preponderance of Boltzmann brains: PICK ONLY TWO. Now it’s entirely reasonable to reject the notion that we are Boltzmann brains on pragmatic grounds. It’s something we might as well assume because there is little point to anything if we don’t. But you can’t dissolve the fact that the SSA and the cosmological model imply that we are Boltzmann brains by relying on our pragmatic insistence that we aren’t (which is what you’re doing with the externalism stuff).
My externalism stuff is just intended to establish that Boltzmann brains and actual humans embedded in stable macroscopic worlds have different evidence available to them. At this point, I need make no claim about which of these is me. So I don’t think the anti-skeptical assumption plays a role here. My claim at this point is just that these two systems are in different epistemic situations (they have different beliefs, knowledge, evidence).
The rejection of skepticism is a separate assumption. As you say, there’s good pragmatic reason to reject skepticism. I’m not sure what you mean by “pragmatic reason”, but if you mean something like “We don’t actually know skepticism is false, but we have to operate under the assumption that it is” then I disagree. We do actually know there is an external world. To claim that we do not is to raise the standard of evidence to an artificially high level. Consistent sensory experience of an object in a variety of circumstances is ordinarily sufficient to claim that we know the object exists (despite the possibility that we may be in the Matrix).
So now we have two premises, both arrived at through different and independent chains of reasoning. The first is that subjective indistinguishability does not entail evidential indistinguishability. The second is that I am not a Boltzmann brain. The combination of these two premises leads to my conclusion, that one might be justified in excluding Boltzmann brains from one’s reference class. Now, a skeptic would attack the second premise. Fair enough, I guess. But realize that is a different premise from the first one. If your objection is skepticism, this objection has nothing to do with semantic externalism. And I think skepticism is a bad (and somewhat pointless) objection.
That’s fine. But what matters is that they can’t actually tell they are in different epistemic situations. You’ve identified an objective distinction between Boltzmann brains and causally-embedded people. That difference is essentially: for the latter the external world exists, for the former it does not. But you haven’t provided anyway for a Boltzmann brain or a regular old-fashioned human being to infer anything different about the external world. You’re confusing yourself with word games. A Boltzman brain and a human being might be evidentially distinguishable in that the former’s intentional states don’t actually refer to anything. But their subjective situations are evidentially indistinguishable. Taboo ‘beliefs’ and ‘knowledge’. Their information states are identical. They will come to identical conclusions about everything. The Boltzmann brain copy of pragmatist is just as confident that he is not a Boltzmann brain as you are.
This statement is only true if you reject either the SSA or a cosmological model that predicts most things that are thinking the same thoughts I am are Boltzmann brains. Which is, like, the whole point of the argument and why it’s not actually a separate assumption. The Boltzmann brain idea, like the Simulation argument, is much stronger than typical Cartesian skepticism and they are in no way identical arguments. The former say that most of the things with your subjective experiences are Boltzmann brains/in a computer simulation. That’s very different from saying that there is a possibility an evil demon is tricking you. And the argument you give above for knowing that there is an external world is sufficient to rebut traditional, Cartesian skepticism but it is not sufficient to rebut the Boltzmann brain idea or the Simulation argument. These are more potent skepticisms.
Look at it this way: You have two premises that point to you being a Boltzmann brain. Your reply is that the SSA doesn’t actually suggest you are a Boltzmann brain because your intentional states have referents and the Boltzmann brain’s do not. That’s exactly what the Boltzmann brain copy of you is thinking. Meanwhile the cosmological model you’re working under says that just about everything thinking that thought is wrong.
Your argument against my view seems to presume that my view is false. I deny that they will come to identical conclusions about everything. When I reason, I come to conclusions about things in my environment. For example, I came to the conclusion that Obama was born in Hawaii, based on evidence about Obama that was available to me. The Boltzmann brain cannot even refer to Obama, so it cannot come to this conclusion.
No. The Boltzmann brain copy of pragmatist doesn’t have any beliefs about Boltzmann brains (or brains in general) to be confident about. I know you disagree, but again, that disagreement is what’s at issue here. Restating the disagreement in different ways isn’t really an argument against my position.
The cosmological model doesn’t predict that there are many Boltzmann brains thinking the same thoughts as me. It predicts that there are many Boltzmann brains in the same brain state as me. Whether the SSA says that I am likely to be one of the Boltzmann brains depends on what the appropriate reference class is. There is good reason to think that the appropriate reference class includes all observers with sufficiently similar evidence as me. I don’t disagree with that version of the SSA. So far, no conflict between SSA + cosmology and the epistemology I’ve described.
What I disagree with is the claim that all subjectively similar observers have to be in the same reference class. The only motivation I can see for this is that subjective similarity entails evidential similarity. But I think there are strong arguments against this. These arguments do not assume anything about whether or not I am a Boltzmann brain. So I don’t see why the arguments I give have to be strong enough to rebut the idea that I’m a Boltzmann brain. That’s not what I’m trying to do. Maybe this comment gives a better idea of how I see the argument I’m responding to, and the nature of my response.
The claim is merely that they will produce identical subsequent brain states and identical nerve impulses.
I agree with this claim, but I don’t see how it can be leveraged into the kind of objection I was responding to. Why should the fact that Boltzmann brains could go through an identical neural process convince me that the reasoning instantiated by me going through the neural process is wrong?
The conclusions of this reasoning, when it’s performed by you, are not wrong, but they are wrong when the same reasoning is performed by a Boltzmann brain. In this sense, the process of reasoning is invalid: it doesn’t produce correct conclusions in all circumstances, and that makes it somewhat unsatisfactory, though of course it works well for the class of instantiations that doesn’t include Boltzmann brains.
As a less loaded model of some of the aspects of the problem, consider two atom-by-atom identical copies of a person who are given identical-looking closed boxes, with one box containing a red glove and the other a green glove. If the green-glove copy for some reason decides that the box it’s seeing contains a green glove, then that copy is right. At the same time, if the green-glove copy so decides, then since the copies are identical, the red-glove copy will also decide that its box contains a green glove, and it will be wrong. Since evidence about the content of the boxes is not available to the copies, deciding either way is in some sense incorrect reasoning, even if it happens to produce a correct belief in one of the reasoners, at the cost of producing an incorrect belief in the other.
OK, that’s a good example. Let’s say the green-glove copy comes to the conclusion that its glove is green because of photons bouncing off the glove and interacting with its cones, which sends certain signals to the optic nerve and so on. In the case of the red-glove copy, a thermodynamic fluctuation occurs that leads it to go through the exact same physical process. That is, the fluctuation makes the cones react just as if they had interacted with green photons, and the downstream process is exactly the same. In this case, would you want to say both duplicates have unjustified beliefs? The green-glove duplicate arrived at its belief through a reliable process; the red-glove duplicate didn’t. I just don’t see why our conclusion about the justification has to be the same across both copies. Even if I bought this constraint, I’d want to say that both of their beliefs are in fact justified. The red-glove one’s belief is false, but false beliefs can be justified. The red-glove copy just got really unlucky.
In my example, the gloves are not observed; the boxes are closed. The states of the brains of both copies, the nerve impulses they generate, and the words they say will all be identical by construction during the thought experiment.
(See also the edit to the grandparent comment, it could be the case that we already agree.)
Whoops, missed that bit. Of course, if either copy is forming a judgment about the glove’s color without actual empirical contact with the glove, then its belief is unjustified. I don’t think the identity of the copies is relevant to our judgment in this case. What would you say about the example I gave, where the box is open and the green-glove copy actually sees the glove? By hypothesis, the brains of both copies remain physically identical throughout the process. In this case, do you think we should judge that there is something problematic about the green-glove copy’s judgment that the glove is green? This case seems far more analogous to a situation involving a human and a Boltzmann brain.
ETA: OK, I just saw the edit. We’re closer to agreement than I thought, but I still don’t get the “unsatisfactory” part. In the example I gave, I don’t think there’s anything unsatisfactory about the green-glove copy’s belief formation mechanism. It’s a paradigm example of forming a belief through a reliable process.
The sense in which your (correct) belief that you are not a Boltzmann brain is justified (or unjustified) seems to me analogous to the situation with the green-glove copy believing that its unobserved glove is green. Justification is a tricky thing: actually not being a Boltzmann brain, or actually being the green-glove copy could in some sense be said to justify the respective beliefs, without a need to rely on distinguishing evidence, but it’s not entirely clear to me how that works.
Do you deny that the Boltzmann brain thinks it can refer to Obama? I.e. that it has some mental representation of an external world that is indistinguishable from your own except insofar as it does not successfully refer to anything in an external world?
If your answer is “Yes, I deny it,” then I don’t think you understand what it means to have identical brain states, or your view presumes metaphysically spooky features that you haven’t unpacked. But what I understand your position to be is that you don’t deny it, but that you think the Boltzmann brain can’t come to identical conclusions about the world because its representation doesn’t successfully refer to anything.
If you want to say that a belief must have a causal connection to the thing it is trying to refer to: fine. We can call what Boltzmann brains have “pseudo-beliefs”. Now, how can you tell if you have beliefs or pseudo-beliefs? You can’t. Claiming that the subjective situations of the human and the Boltzmann brain are evidentially distinguishable is totally bizarre when they themselves can’t make the distinction.
The reason people are focusing on the viability of the skeptical scenario in their responses to you is that it looks like the reason you think this is a viable evidential distinction is that you are unreasonably confident that your mental states successfully refer to an external world. Moreover, a solution to the argument that doesn’t reject the SSA or the cosmological model shouldn’t just play with the meaning of words—readers should react with a sense of “Oh, good. The external world exists after all.” If they don’t it’s a good indication that you haven’t really addressed the problem. This is true even though the argument starts by assuming that we are not Boltzmann brains since obviously the logical structure remains intact.
You should stop assuming that everyone is misunderstanding you. Everyone is giving you the same criticism in different words and your argument is not being upvoted. Update on this information.
Yes. I don’t think the Boltzmann brain has a representation of the external world at all. Whether or not a system state is a representation of something else is not an intrinsic property of the state. It depends on how the state was produced and how it is used. If you disagree, could you articulate what it is you think makes a state representational?
I could use a salt and pepper shaker to represent cars when I’m at a restaurant telling my friend about a collision I recently experienced. Surely you’d agree that the particular arrangement of those shakers I constructed is not intrinsically representational. If they had ended up in that arrangement by chance they wouldn’t be representing anything. Why do you think neural arrangements are different?
I don’t think this is what I’m claiming. I’m not sure what you mean by the “subjective situations” of the human and the Boltzmann brain, but I don’t think I’m claiming that the subjective situations themselves are evidentially distinguishable. I don’t think I can point to some aspect of my phenomenal experience that proves I’m not a Boltzmann brain.
I’m claiming that the evidence available to me goes beyond my phenomenal experience, that one’s evidence isn’t fully determined by one’s “subjective situation”. Boltzmann brains and human beings have different evidence. Is this the view you regard as bizarre? It doesn’t seem all that different from the view expressed in the Sequences here.
Further, I think if two observers have vastly different sets of evidence, then it is permissible to place them in separate reference classes when reasoning about certain anthropic problems.
I don’t assume this. I think some people are misunderstanding me. Others have expressed a position which I see as actually opposed to my position, so I’m pretty sure they have understood me. I think they’re wrong, though.
Your incorrect prediction about how I would respond to your question is an indication that you have at least partially misunderstood me. I suspect the number of people who have misunderstood me on this thread is explicable by a lack of clarity on my part.
I have, but it does not shift my credence enough to convince me I’m wrong. Does the fact that a majority of philosophers express agreement with externalism about mental content lead you to update your position somewhat? If you are unconvinced that I am accurately representing externalism, I encourage you to read the SEP article I linked and make up your own mind.
Not “the” external world: “an external world.” And when I say external world, I don’t mean a different, actually existing external world, but a symbolic system that purports to represent an external world, just as a human’s brain contains a symbolic system (or something like that) that actually represents the external world.
The question here isn’t the relationship between the symbolic system/neural arrangement (call it S) and the system S purports to represent (call it R). The question is about the relation between S and the rest of the neural arrangement that produces phenomenal experience (call it P). If I understood exactly how that worked I would have solved the problem of consciousness. I have not done so. I’m okay with an externalism that simply says for S to count as an intentional state it must have a causal connection to R. But a position that a subject’s phenomenal experience supervenes not just on P and S but also on R is much more radical than typical externalism and would, absent a lot of explanation, imply that physicalism is false.
You’re not a phenomenal externalist, correct?
Ah, this might be the issue of contention. What makes phenomenal experience evidence of anything is that we have good reason to think it is causally entangled in an external world. But a Boltzmann brain would have exactly the same reasons. That is, an external, persistent, physical world is the best explanation of our sensory experiences, and so we take our sensory experiences to tell us things about that world, which lets us predict and manipulate it. But the Boltzmann brain has exactly the same sensory experiences (and memories). It will make the same predictions (regarding future sensory data) and run the same experiments (except the Boltzmann brain’s will be ‘imaginary’ in a sense), which will return the same results (in terms of sensory experiences).
I don’t really want this to be about the definition of evidence. But surely having different sets of evidence implies that a perfect Bayesian reasoner wouldn’t return the same updates and credences for both sets!
No, I’m not, you’ll be glad to hear. There are limits even to my lunacy. I was just objecting to your characterization of the BB’s brain state as a representation. I’m not even all that happy with calling it a purported representation. If the salt and pepper shaker arrangement occurs by chance, does that make it a purported representation without actual representational content? Who’s doing the purporting? Is it sufficient that some system could be used as a representation for it to count as a purported representation? In that case, everything is a purported representation.
I think there’s a tendency to assume our mental representations somehow have intrinsic representational properties that we wouldn’t attribute to other external representations. This is probably because phenomenal representation seems so immediate. If a Boltzmann brain’s visual system were in the same state mine is in when I see my mother, then maybe the brain isn’t visually representing my mother, but surely it is representing a woman, or at least something. Well, no, I don’t think so. If a physical system that is atom-for-atom identical to a photograph of my mother congealed out of a high entropy soup it would not be a representation of my mother. It wouldn’t be a representation at all, and not even a purported one.
First, the Boltzmann brain and I do not return the same updates. The only thing identical about our updates is their syntactical instantiation. Their semantics differ. In fact, I wouldn’t even say the Boltzmann brain is performing Bayesian reasoning. Being able to find an isomorphism between a physical process some system is undergoing and a genuine computational process does not mean that the system is actually performing the computation. Accepting that would entail that every physical system is performing every computation.
Second, I disagree with your claim that perfect Bayesian reasoners would return different updates for different sets of evidence. I see no reason to believe this is true. As long as the likelihood ratios (and priors) are the same, the updates will be the same, but likelihood ratios aren’t unique to particular pieces of evidence. As an example, suppose a hypothesis H predicts a 30% chance of observing a piece of evidence E1, and the chance of observing that evidence if H had been false is 10%. It seems to me entirely possible that there is a totally different piece of evidence, E2, which H also predicts has a 30% chance of being observed, and ~H predicts has a 10% chance of being observed. A Bayesian reasoner who updated on E1 would return the same credence as one who updated on E2, even though E1 and E2 are different. None of this seems particularly controversial. Am I misunderstanding your claim?
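To make the point about likelihood ratios concrete, here is a minimal sketch in Python (the 30% and 10% likelihoods are the figures from the example above; the 0.5 prior is just an illustrative assumption of mine):

```python
# Posterior credence in a hypothesis H after updating on one piece of evidence,
# given P(E | H) and P(E | ~H), via Bayes' theorem.
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    numerator = p_e_given_h * prior_h
    denominator = numerator + p_e_given_not_h * (1 - prior_h)
    return numerator / denominator

prior = 0.5  # illustrative prior for H; not a figure from the discussion

# E1 and E2 are different pieces of evidence, but H assigns each a 30% chance
# of being observed and ~H assigns each a 10% chance, so the likelihood
# ratios (and hence the updates) are identical.
post_e1 = posterior(prior, 0.30, 0.10)
post_e2 = posterior(prior, 0.30, 0.10)

print(post_e1, post_e2)  # both 0.75
```

The reasoner who updates on E1 and the one who updates on E2 end up with the same credence, even though E1 and E2 are distinct pieces of evidence.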
Yes, but that’s my fault. Let’s put it this way. A set of evidence is indistinguishable from another set of evidence if and only if an ideal Bayesian reasoner can update on either and then not update at all after learning the other set.
That’s not the issue. Neither you nor your Boltzmann brain copy is an ideal Bayesian reasoner. The question is: what happens when you feed your evidence to an ideal Bayesian reasoner and then feed the Boltzmann brain’s evidence. Will the ideal Bayesian reasoner find anything new to update on? What if you reverse the process and feed the Boltzmann brain’s evidence first? Will the ideal Bayesian reasoner update then?
The pancomputation issue is tricky but has nothing to do with this. By stipulation Boltzmann brains are physically similar enough to humans to make computations that produce exactly similar brain states. Moreover, you say you are not a phenomenal externalist so the computations made by Boltzmann brains apparently produce exactly similar phenomenal experiences. Pancomputation isn’t any more of a problem for me than it is for you.
Perhaps this is just going to end up being a reductio on externalism.
The Boltzmann brain, obviously. Are you denying that a Boltzmann brain can have any intentional states? I.e., can it believe things about its phenomenal experience, qualia, or other mental states? Can’t it believe it believes something?
Well, the simpler part of this is that representation is a three-place predicate: system A represents system B to observer C1, which does not imply that A represents B to C2, nor does it prevent A from representing B2 to C2. (Nor, indeed, to C1.)
So, yes, a random salt-and-pepper-shaker arrangement might represent any number of things to any number of observers.
A purported representation is presumably some system A about which the claim is made (by anyone capable of making claims) that there exists a (B, C) pair such that A represents B to C.
But there’s a deeper disconnect here having to do with what it means for A to represent B to C in the first place, which we’ve discussed elsethread.
Sure. And if I had a brain that could in fact treat all theoretically possible isomorphisms as salient at one time, I would indeed treat every physical system as performing every computation, and also as representing every other physical system. In fact, though, I lack such a brain; what my brain actually does is treat a vanishingly small fraction of theoretically possible isomorphisms as salient, and I am therefore restricted to only treating certain systems as performing certain computations and as representing certain other systems.
You know about your concept of Obama. You have memories of sensations which seem to validate parts of this concept. But you do not know that your world contains an object matching the concept.
You don’t think I know that Obama exists (out there, in the world, not in my head)? It sounds like you’re using the word “knowledge” very differently from the way it’s ordinarily used. According to you, can we know anything about the world outside our heads?
Well, sure, people say they “know” all sorts of things that they don’t actually know. It would be formidably difficult to speak and write in a way that constantly acknowledges the layered uncertainty actually present in the situation. Celia Green says the uncertainty is total, which isn’t literally true, but it’s close to the truth.
Experience consists of an ongoing collision between belief and reality, and reality is that I don’t know what will happen even one second from now, I don’t know the true causes of my sensations, and so on—though I may have beliefs about these matters. My knowledge is a small island in an ocean of pragmatic belief, and mostly concerns transient superficial sensory facts, matters known by definition and deduction, perhaps some especially vivid memories tying together sensation and concept, and a very slowly growing core of ontological facts obtained by phenomenological reflection, such as the existence of time, thought, sensation, etc. Procedural knowledge also deserves a separate mention, though it is essentially a matter of knowing how to try to do something; success is not assured.
We’re radical skeptics to the point of seriously considering that you may be a Boltzmann brain. And that’s what you think that would mean. (I also disagree with the way you use certain words, but others have said that.)
Maybe you are, but I’m not. Nothing in my argument (or the Boltzmann brain argument in general) requires one to seriously entertain the possibility that one is a Boltzmann brain.
Well, that’s what you wanted to convince people of. And your argument in the OP is wrong, and others have explained why (incorrect word usage and word games).
There are other, better arguments: for example I’m as simple an observer as I might have been, which argues strongly that I’m not a chance fluctuation. It’s legitimate to take such arguments as evidence against big-universe theories where BBs flourish. But if other data suggests a big universe, then there’s still an unanswered question to be resolved.
Just to clarify: Is the word “belief” one of the words you think I’m using incorrectly? And do you think that the incorrect usage is responsible for me saying things like “Boltzmann brains couldn’t have beliefs about Obama”? Relatedly, do you think Boltzmann brains could in fact have beliefs about Obama?
I’m not trying to get into an argument about this here. I just want a sense of what people think about this. I might defend my claims about belief in a separate post later.
I can’t speak for Dan, of course, but for my own part: I think this whole discussion has gotten muddled by failing to distinguish clearly enough between claims about the world and claims about language.
I’m not exactly sure what you or Dan mean by “incorrect word usage” here so I can’t easily answer your first question, but I think the distinction you draw between beliefs and brain-states-that-could-be-beliefs-if-they-had-intentional-content-but-since-they-don’t-aren’t is not an important distinction, and using the label “belief” to describe the former but not the latter is not a lexical choice I endorse.
I think that lexical choice is responsible for you saying things like “Boltzmann brains couldn’t have beliefs about Obama.”
I think Boltzmann brains can enter brain-states which correspond to the brain-states that you would call “beliefs about Obama” were I to enter them, and I consider that correspondence strong enough that I see no justification to not also call the BB’s brain-states “beliefs about Obama.”
As far as I can tell, you and I agree about all of this except for what things in the world the word “belief” properly labels.
Do you feel the same way about the word “evidence”? Do you feel comfortable saying that an observer can have evidence regarding the state of some external system even if its brain state is not appropriately causally entangled with that system?
I obviously agree with you that how we use “belief” and “evidence” is a lexical choice. But I think it is a lexical choice with important consequences. Using these words in an internalist manner generally indicates (perhaps even encourages) a failure to recognize the importance of distinguishing between syntax and semantics, a failure I think has been responsible for a lot of confused philosophical thinking. But this is a subject for another post.
This gets difficult, because there’s a whole set of related terms I suspect we aren’t quite using the same way, so there’s a lot of underbrush that needs to get cleared to make clear communication possible.
When I’m trying to be precise, I talk about experiences providing evidence which constrains expectations of future experiences. That said, in practice I do also treat clusters of experience that demonstrate persistent patterns of correlation as evidence of the state of external systems, though I mostly think of that sort of talk as kinda sloppy shorthand for an otherwise too-tedious-to-talk-about set of predicted experiences.
So I feel reasonably comfortable saying that an experience E1 can serve as evidence of an external system S1. Even if I don’t actually believe that S1 exists, I’m still reasonably comfortable saying that E1 is evidence of S1. (E.g., being told that Santa Claus exists is evidence of the existence of Santa Claus, even if it turns out everyone is lying.)
If I have a whole cluster of experiences E1...En, all of which reinforce one another and reinforce my inference of S1, and I don’t have any experiences which serve as evidence that S1 doesn’t exist, I start to have compelling evidence of S1 and my confidence in S1 increases. All of this can occur even if it turns out that S1 doesn’t actually exist. And, of course, some other system S2 can exist without my having any inkling of it. This is all fairly unproblematic.
So, moving on to the condition you’re describing, where E1 causes me to infer the existence of S1, and S1 actually does exist, but S1 is not causally entangled with E1. I find it simpler to think about a similar condition where there exist two external systems, S1 and S2, such that S2 causes E1 and on the basis of E1 I infer the existence of S1, while remaining ignorant of S2. For example, I believe Alice is my birth mother, but in fact Alice (S1) and my birth mother (S2) are separate people. My birth mother sends me an anonymous email (E1) saying “I am your birth mother, and I have cancer.” I infer that Alice has cancer. It turns out that Alice does have cancer, but that this had no causal relationship with the email being sent.
I am comfortable in such an arrangement saying that E1 is evidence that S1 has cancer, even though E1 is not causally entangled with S1’s cancer.
Further, when discussing such an arrangement, I can say that the brain-states caused by E1 are about S1 or about S2 or about both or neither, and it’s not at all clear to me what if anything depends on which of those lexical choices I make. Mostly, I think asking what E1 is really “about” is a wrong question; if it is really about anything it’s about the entire conjoined state of the universe, including both S1 and S2 and everything else, but really who cares?
And if instead there is no S2, and E1 just spontaneously comes into existence, the situation is basically the same as the above, it’s just harder for me to come up with plausible examples.
Perhaps it would help to introduce a distinction here. Let’s distinguish internal evidence and external evidence. P1 counts as internal evidence for P2 if it is procedurally rational for me to alter my credence in P2 once I come to accept P1, given my background knowledge. P1 is external evidence for P2 if the truth of P1 genuinely counterfactually depends on the truth of P2. That is, P1 would be false (or less frequently true, if we’re dealing with statistical claims) if P2 were false. A proposition can be internal evidence without being external evidence. In your anonymous letter example, the letter is internal evidence but not external evidence.
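To make the distinction concrete, here is a rough sketch based on your anonymous-letter example. The toy world model (the letter is sent iff the birth mother has cancer, and Alice’s cancer is causally independent of the letter) is just an illustrative assumption:

```python
import itertools

# Toy version of the anonymous-letter case. In the true model, Alice and the
# birth mother are different people; the letter is sent iff the birth mother
# has cancer, and Alice's cancer has no influence on the letter.
def letter_sent(birth_mother_cancer, alice_cancer):
    return birth_mother_cancer

# The agent's (mistaken) background model: Alice *is* the birth mother,
# so the agent models the letter as sent iff Alice has cancer.
def letter_sent_agent_model(alice_cancer):
    return alice_cancer

# Internal evidence: under the agent's model (uniform prior over whether
# Alice has cancer), accepting "the letter was sent" shifts the agent's
# credence that Alice has cancer.
worlds = [True, False]
prior = sum(worlds) / len(worlds)                      # 0.5
consistent = [w for w in worlds if letter_sent_agent_model(w)]
posterior = sum(consistent) / len(consistent)          # 1.0
print("internal evidence:", prior != posterior)        # True

# External evidence: does the truth of "the letter was sent" counterfactually
# depend on "Alice has cancer"? Flip Alice's cancer status while holding the
# rest of the world fixed and check whether the letter's being sent changes.
def depends_on_alice(birth_mother_cancer, alice_cancer):
    actual = letter_sent(birth_mother_cancer, alice_cancer)
    flipped = letter_sent(birth_mother_cancer, not alice_cancer)
    return actual != flipped

print("external evidence:", any(
    depends_on_alice(bm, al)
    for bm, al in itertools.product([True, False], repeat=2)))  # False
```

The letter shifts my credence that Alice has cancer given my background beliefs, so it is internal evidence for that proposition; but its truth does not counterfactually depend on Alice’s cancer, so it is not external evidence.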
Which conception of evidence is the right one to use will probably depend on context. When we are attempting to describe an individual’s epistemic status—the amount of reliable information they possess about the world—then it seems that external evidence is the relevant variety of evidence to consider. And if two observers differ substantially in the external evidence available to them, it seems justifiable to place them in separate reference classes for certain anthropic explanations. Going back to an early example of Eliezer’s, involving a superintelligence and mental patients who believe, just as firmly, that they are superintelligences:
If the superintelligence were engaging in anthropic reasoning, should it put itself in the same reference class as the mental patients in all cases? If we think identical (or similar) internal evidence requires that they be in the same reference class, then I think the answer may be yes. But I think the answer is fairly obviously no, and this is because of the vast difference in the epistemic situations of the superintelligence and the mental patients, a difference attributable to differences in external evidence.
I accept your working definitions for “internal evidence” and “external evidence.”
I want to be a little careful about the words “epistemic status” and “reliable information,” because a lot of confusion can be introduced through the use of terms that abstract.
I remember reading once that courtship behavior in robins is triggered by the visual stimulus of a patch of red taller than it is wide. I have no idea if this is actually true, but suppose it is. The idea was that the ancestral robin environment didn’t contain other stimuli like that other than female robins in estrus, so it was a reliable piece of evidence to use at the time. Now, of course, there are lots of visual stimuli in that category, so you get robins initiating courtship displays at red socks on clotheslines and at Coke cans.
So, OK. Given that, and using your terms, and assuming it makes any sense to describe what a robin does here as updating on evidence at all, then a vertical red swatch is always internal evidence of a fertile female, and it was external evidence a million years ago (when it “genuinely” counterfactually depended on the presence of such a female) but it is not now. If we put some robins in an environment from which we eliminate all other red things, it would be external evidence again. (Yes?)
If what I am interested in is whether a given robin is correct about whether it’s in the presence of a fertile female, external evidence is the relevant variety of information to consider.
If what I am interested in is what conclusions the robin will actually reach about whether it’s in the presence of a fertile female, internal evidence is the relevant variety of information to consider.
If that is consistent with your claim about the robin’s epistemic status and about the amount of reliable information the robin possesses about the world, then great, I’m with you so far. (If not, this is perhaps a good place to back up and see where we diverged.)
Sure, when available external evidence is particularly relevant to those anthropic explanations.
So A and B both believe they’re superintelligences. As it happens, A is in fact an SI, and B is in fact a mental patient. And the question is, should A consider itself in the same reference class as B? Yes?
Absolutely agreed. I don’t endorse any decision theory that results in A concluding that it’s more likely to be a mental patient than a SI in a typical situation like this, and this is precisely because of the nature of the information available to A in such a situation.
Wait, what?
Why in the world would A and B have similar internal evidence?
I mean, in any normal environment, if A is a superintelligence and B is a mental patient, I would expect A to have loads of information on the basis of which it is procedurally rational for A to conclude that A is in a different reference class than B. Which is internal evidence, on your account. No?
But, OK. If I assume that A and B do have similar internal evidence… huh. Well, that implicitly assumes that A is in a pathologically twisted epistemic environment. I have trouble imagining such an environment, but the world is more complex than I can imagine. So, OK, sure, I can assume such an environment, in a suitably hand-waving sort of way.
And sure, I agree with you: in such an environment, A should consider itself in the same reference class as B. A is mistaken, of course, which is no surprise given that it’s in such an epistemically tainted environment.
Now, I suppose one might say something like “Sure, A is justified in doing so, but A should not do so, because A should not believe falsehoods.” Which would reveal a disconnect relating to the word “should,” in addition to everything else. (When I say that A should believe falsehoods in this situation, I mean I endorse the decision procedure that leads to doing so, not that I endorse the result.)
But we at least ought to agree, given your word usage, that it is procedurally rational for A to conclude that it’s in the same reference class as B in such a tainted environment, even though that isn’t true. Yes?
Yes, yes, and yes—in the usual meaning of “belief”. There are different-but-related meanings which are sometimes used, but the way you use it is completely unlike the usual meanings.
More importantly, you state that a BB can’t have “beliefs” in your sense, which is a (re)definition—that merely makes your words unclear and misunderstood—but then you conclude that because you have “beliefs” you are not a BB. This is simply wrong, even using your own definition of “belief”—because under your definition, having “real beliefs” is not a measurable fact of someone’s brain in reality, and so you can never make conclusions like “I have real beliefs” or “I am not a BB” based on your own brain state. (And all of our conclusions are based on our brain states.)
IOW: a BB similar to yourself would reach the same conclusions as you—that it is not a BB—but it would be wrong. However, it would be reasoning from the exact same evidence as you. Therefore, your reasoning is faulty.
I disagree that it would be reasoning from the exact same evidence as me. I’m an externalist about evidence too, not just about belief.
Again, you’re using the word “evidence” differently from everyone else. This only serves to confuse the discussion.
Tabooing “evidence”, what I was saying is that a BB would have the same initial brain-state (what I termed “evidence”) and therefore would achieve the same final brain-state (what I termed “conclusions”). The laws of physics for its brain-state evolution, and the physical causality between the two states, are the same as for your brain. This is trivially so by the very definition of a BB that is sufficiently similar to your brain.
I don’t know what you mean by “externalist evidence” and I don’t see how it would matter. The considerations that apply here are exactly the same as in Eliezer’s discussion of p-zombies. Imagine a BB which is a slightly larger fluctuation than a mere brain; it is a fluctuation of a whole body, which can live for a few seconds and can speak and think in that time. It would think and say “I am conscious” for the same reasons as you do; therefore it is not a p-zombie. It would think and say “Barack Obama exists” for the same reasons as you do; therefore what everyone-but-you calls its knowledge and its beliefs about “Barack Obama” are of the same kind as yours.
Wait, would an equivalent way to put it be that “evidential” here means “as viewed by an outside observer” as opposed to “from the inside” (the perspective of a Boltzmann brain)?