Yes, but that’s my fault. Let’s put it this way. A set of evidence is indistinguishable from another set of evidence if and only if an ideal Bayesian reasoner, having updated on either set, would not update at all upon learning the other.
First, the Boltzmann brain and I do not return the same updates.
That’s not the issue. Neither you nor your Boltzmann brain copy is an ideal Bayesian reasoner. The question is: what happens when you feed your evidence to an ideal Bayesian reasoner and then feed it the Boltzmann brain’s evidence? Will the ideal Bayesian reasoner find anything new to update on? What if you reverse the process and feed the Boltzmann brain’s evidence first? Will the ideal Bayesian reasoner update then?
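To make the criterion concrete (a minimal formalization on my part: $E_{\text{you}}$ and $E_{\text{BB}}$ are just my labels for your total evidence and the Boltzmann brain’s, $P$ is the ideal reasoner’s credence function, and $H$ ranges over hypotheses):

\[
P(H \mid E_{\text{you}}) = P(H \mid E_{\text{you}} \wedge E_{\text{BB}})
\quad\text{and}\quad
P(H \mid E_{\text{BB}}) = P(H \mid E_{\text{BB}} \wedge E_{\text{you}})
\quad\text{for every } H.
\]

If both equalities hold, then whichever set the reasoner learns first, learning the second moves no credence at all, and that is what I mean by calling the two sets indistinguishable.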
The only thing identical about our updates is their syntactical instantiation. Their semantics differ. In fact, I wouldn’t even say the Boltzmann brain is performing Bayesian reasoning. Being able to find an isomorphism between a physical process some system is undergoing and a genuine computational process does not mean that the system is actually performing the computation. Accepting that would entail that every physical system is performing every computation.
The pancomputation issue is tricky, but it has nothing to do with this. By stipulation, Boltzmann brains are physically similar enough to humans to perform computations that produce exactly similar brain states. Moreover, you say you are not a phenomenal externalist, so the computations made by Boltzmann brains apparently produce exactly similar phenomenal experiences. Pancomputation isn’t any more of a problem for me than it is for you.
Perhaps this is just going to end up being a reductio on externalism.
Who’s doing the purporting?
The Boltzmann brain, obviously. Are you denying that a Boltzmann brain can have any intentional states? That is, can it believe things about its phenomenal experience, qualia, or other mental states? Can’t it believe it believes something?