No, I’m not, you’ll be glad to hear. There are limits even to my lunacy. I was just objecting to your characterization of the BB’s brain state as a representation. I’m not even all that happy with calling it a purported representation. If the salt and pepper shaker arrangement occurs by chance, does that make it a purported representation without actual representational content? Who’s doing the purporting? Is it sufficient that some system could be used as a representation for it to count as a purported representation? In that case, everything is a purported representation.
I think there’s a tendency to assume our mental representations somehow have intrinsic representational properties that we wouldn’t attribute to external representations. This is probably because phenomenal representation seems so immediate. If a Boltzmann brain’s visual system were in the same state mine is in when I see my mother, then, the thought goes, maybe the brain isn’t visually representing my mother, but surely it is representing a woman, or at least something. Well, no, I don’t think so. If a physical system atom-for-atom identical to a photograph of my mother congealed out of a high-entropy soup, it would not be a representation of my mother. It wouldn’t be a representation at all, not even a purported one.
But surely having different sets of evidence implies that a perfect Bayesian reasoner wouldn’t return the same updates and credences for both sets!
First, the Boltzmann brain and I do not return the same updates. The only thing identical about our updates is their syntactical instantiation. Their semantics differ. In fact, I wouldn’t even say the Boltzmann brain is performing Bayesian reasoning. Being able to find an isomorphism between a physical process some system is undergoing and a genuine computational process does not mean that the system is actually performing the computation. Accepting that would entail that every physical system is performing every computation.
Second, I disagree with your claim that perfect Bayesian reasoners would return different updates for different sets of evidence. I see no reason to believe this is true. As long as the likelihood ratios (and priors) are the same, the updates will be the same, and likelihood ratios aren’t unique to particular pieces of evidence. As an example, suppose a hypothesis H predicts a 30% chance of observing a piece of evidence E1, while the chance of observing that evidence if H is false is 10%. It seems to me entirely possible that there is a totally different piece of evidence, E2, which H also predicts has a 30% chance of being observed and ~H predicts has a 10% chance of being observed. A Bayesian reasoner who updated on E1 would return the same credence as one who updated on E2, even though E1 and E2 are different. None of this seems particularly controversial. Am I misunderstanding your claim?
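For concreteness, here’s a minimal sketch of that arithmetic. The 50% prior is my own arbitrary choice for illustration; the equality holds for any prior, since only the likelihoods enter the update.

```python
from fractions import Fraction

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    # Bayes' theorem: P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))
    numerator = p_e_given_h * prior_h
    return numerator / (numerator + p_e_given_not_h * (1 - prior_h))

prior = Fraction(1, 2)  # illustrative prior on H; any value works the same way
post_e1 = posterior(prior, Fraction(3, 10), Fraction(1, 10))  # updating on E1
post_e2 = posterior(prior, Fraction(3, 10), Fraction(1, 10))  # updating on E2: same likelihoods
assert post_e1 == post_e2 == Fraction(3, 4)  # identical credence either way
```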
Yes, but that’s my fault. Let’s put it this way. A set of evidence is indistinguishable from another set of evidence if and only if an ideal Bayesian reasoner can update on either and then update not at all after learning the other set.
First, the Boltzmann brain and I do not return the same updates.
That’s not the issue. Neither you nor your Boltzmann brain copy is an ideal Bayesian reasoner. The question is: what happens when you feed your evidence to an ideal Bayesian reasoner and then feed it the Boltzmann brain’s evidence? Will the ideal Bayesian reasoner find anything new to update on? What if you reverse the process and feed the Boltzmann brain’s evidence first? Will the ideal Bayesian reasoner update then?
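Schematically, the test I have in mind looks something like this toy sketch. The tiny space of worlds and the stipulation that the two evidence sets pick out the same worlds are my own illustrative assumptions, not part of the argument:

```python
def condition(prior, evidence):
    # Condition a distribution over worlds on an evidence event (a set of worlds).
    mass = sum(p for w, p in prior.items() if w in evidence)
    return {w: (p / mass if w in evidence else 0.0) for w, p in prior.items()}

prior = {"w1": 0.25, "w2": 0.25, "w3": 0.25, "w4": 0.25}
my_evidence = {"w1", "w2"}  # worlds compatible with my evidence
bb_evidence = {"w1", "w2"}  # worlds compatible with the Boltzmann brain's evidence (stipulated identical)

# Feed my evidence first, then the Boltzmann brain's: no further update.
after_mine = condition(prior, my_evidence)
assert condition(after_mine, bb_evidence) == after_mine

# Reverse the order: again, nothing new to update on.
after_bb = condition(prior, bb_evidence)
assert condition(after_bb, my_evidence) == after_bb
```

If both assertions hold, the two evidence sets are indistinguishable in the sense defined above; if either conditioning changed the distribution, they wouldn’t be.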
The only thing identical about our updates is their syntactical instantiation. Their semantics differ. In fact, I wouldn’t even say the Boltzmann brain is performing Bayesian reasoning. Being able to find an isomorphism between a physical process some system is undergoing and a genuine computational process does not mean that the system is actually performing the computation. Accepting that would entail that every physical system is performing every computation.
The pancomputation issue is tricky, but it has nothing to do with this. By stipulation, Boltzmann brains are physically similar enough to humans to perform computations that produce exactly similar brain states. Moreover, you say you are not a phenomenal externalist, so the computations made by Boltzmann brains apparently produce exactly similar phenomenal experiences. Pancomputation isn’t any more of a problem for me than it is for you.
Perhaps this is just going to end up being a reductio on externalism.
Who’s doing the purporting?
The Boltzmann brain, obviously. Are you denying that a Boltzmann brain can have any intentional states? I.e., can it believe things about its phenomenal experience, qualia, or other mental states? Can’t it believe it believes something?
If the salt and pepper shaker arrangement occurs by chance, does that make it a purported representation without actual representational content? Who’s doing the purporting?
Well, the simpler part of this is that representation is a three-place predicate: system A represents system B to observer C1, which does not imply that A represents B to C2, nor does it prevent A from representing B2 to C2. (Nor, indeed, to C1.)
So, yes, a random salt-and-pepper-shaker arrangement might represent any number of things to any number of observers.
A purported representation is presumably some system A about which the claim is made (by anyone capable of making claims) that there exists a (B, C) pair such that A represents B to C.
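Or, to make the quantifier structure explicit (my own formalization, nothing more):

$\mathrm{PurportedRep}(A) \iff \exists D\, \exists B\, \exists C:\ \mathrm{Claims}\big(D,\ \mathrm{Represents}(A, B, C)\big)$, where $D$ ranges over anything capable of making claims.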
But there’s a deeper disconnect here having to do with what it means for A to represent B to C in the first place, which we’ve discussed elsethread.
Being able to find an isomorphism between a physical process some system is undergoing and a genuine computational process does not mean that the system is actually performing the computation. Accepting that would entail that every physical system is performing every computation.
Sure. And if I had a brain that could in fact treat all theoretically possible isomorphisms as salient at one time, I would indeed treat every physical system as performing every computation, and also as representing every other physical system. In fact, though, I lack such a brain; what my brain actually does is treat a vanishingly small fraction of theoretically possible isomorphisms as salient, and I am therefore restricted to only treating certain systems as performing certain computations and as representing certain other systems.