My question is: why ever exclude a conscious observer from your reference class? Your reference class is basically an assumption you make about who you are. Obviously, you have to be conscious, but why assume you're not a Boltzmann brain? If they exist, you could be one of them. A Boltzmann brain that uses your logic would exclude itself from its reference class, and therefore conclude that it cannot be itself. It would be infinitely wrong. This would indicate that the logic is faulty.
There is no appropriate causal connection between Obama and that brain, so how could its beliefs be about him?
That’s just how you’re defining belief. If the brain can’t tell, it’s not evidence, and therefore irrelevant.
One way to see the difference between my representational states and the Boltzmann brain's is to think counterfactually. If Barack Obama had lost the election in 2008, my current brain state would have been different in (at least partially) predictable ways. I would no longer have the belief that he was President, for instance. The Boltzmann brain's brain states don't possess this counterfactual dependency. Doesn't this suggest an epistemic difference between me and the Boltzmann brain?
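The counterfactual point can be put as a toy sketch (purely illustrative; the function names and the "McCain" counterfactual are my own framing, not part of the original discussion): my belief state is a function of how the world actually went, while the fluctuation's state is not.

```python
# Hypothetical sketch: counterfactual dependence as functional dependence.

def my_belief(election_winner):
    # My brain state tracks the outcome: change the input, and the
    # resulting belief changes with it.
    return f"{election_winner} is President"

def boltzmann_belief(election_winner):
    # The fluctuation's state is fixed by thermodynamic chance, regardless
    # of what happened out in the world; the input makes no difference.
    return "Barack Obama is President"

# My state varies across the counterfactual; the Boltzmann brain's does not.
assert my_belief("Obama") != my_belief("McCain")
assert boltzmann_belief("Obama") == boltzmann_belief("McCain")
```

The asymmetry in the two assertions is the whole point: only the first function's output carries information about its input.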
That’s just how you’re defining belief. If the brain can’t tell, it’s not evidence, and therefore irrelevant.
I don’t think this is a mere definitional matter. If I have evidence, it must correspond to some contentful representation I possess. Evidence is about stuff out there in the world; it has content. And it’s not just definitional to say that representations don’t acquire content magically. The contentfulness of a representation must be attributable to some physical process linking the content of the representation to the physical medium of the representation. If a piece of paper spontaneously congealed out of a high-entropy soup bearing the inscription “BARACK OBAMA”, would you say it was referring to the President? What if the same inscription were typed by a reporter who had just interviewed the President?
Recognizing that representation depends on physical relationships between the object (or state of affairs) represented and the system doing the representing seems to me to be crucial to fully embracing naturalism. It’s not just a semantic issue (well, actually, it is just a semantic issue, in that it’s an issue about semantics, but you get what I mean).
And I don’t know what you mean when you say “If the brain can’t tell...”. Not only does the Boltzmann brain lack the information that Barack Obama is President, it cannot even form the judgment that it possesses this information, since that would presuppose that it can represent the content of the belief. So in this case, I guess my brain can tell that I have the relevant evidence, and the Boltzmann brain cannot, even though they are in the same state. Or did you mean something about identical phenomenal experience by “the brain can’t tell...”? That just begs the question.
A Boltzmann brain that uses your logic would exclude itself from its reference class, and therefore conclude that it cannot be itself. It would be infinitely wrong. This would indicate that the logic is faulty.
The Boltzmann brain would not be using my logic. In my post, I refer to a number of things to which a Boltzmann brain could not refer, such as Boltzmann. I doubt that one could even call the brain states of a Boltzmann brain genuinely representational, so the claim that it is engaged in reasoning is itself questionable. I am reminded here of arguments against pancomputationalism. A Boltzmann brain isn’t reasoning about cosmology for the same sort of reason that a rock isn’t playing tic-tac-toe. The existence of an isomorphism between it and some system that is reasoning about cosmology (or playing tic-tac-toe) is insufficient.
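The anti-pancomputationalism point can be illustrated with a toy sketch (every name and value below is hypothetical, chosen only for illustration): a mapping between a rock's states and a tic-tac-toe game's states is trivially constructible for any rock, which is exactly why the bare existence of such a mapping can't make the rock a player.

```python
# Toy illustration: any system passing through n distinct states can be
# mapped one-to-one onto n states of a tic-tac-toe game, so the mere
# existence of the mapping carries no computational credit.

# Three successive board positions in a short tic-tac-toe game.
game_states = [
    "X........",
    "X...O....",
    "X...O.X..",
]

# Three successive physical states of a rock, labeled (arbitrarily) by
# temperature readings -- any distinct labels would do equally well.
rock_states = [20.00, 20.01, 20.02]

# The "isomorphism": a lookup table pairing rock states with game states.
mapping = dict(zip(rock_states, game_states))

# Under this mapping the rock's trajectory "is" the game...
assert [mapping[s] for s in rock_states] == game_states
# ...but the mapping was built from the game, not discovered in the rock:
# all the tic-tac-toe structure lives in the lookup table itself.
```

Since such a table exists for any three-state process whatsoever, being isomorphic to a game (or to a cosmologist's reasoning) is too cheap to be sufficient for playing it (or doing it).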
Do beliefs feel different from the inside if they are internally identical, but don’t correspond to the same outside world?
I’m pretty sure identical brain states feel the same from the inside. I’m not sure that it feels like anything in particular to have a belief. What do you think about what I say in this comment?