Why should the fact that Boltzmann brains could go through an identical neural process convince me that the reasoning instantiated by my going through that neural process is wrong?
The conclusions of this reasoning, when it’s performed by you, are not wrong, but they are wrong when the same reasoning is performed by a Boltzmann brain. In this sense, the process of reasoning is invalid: it doesn’t produce correct conclusions in all circumstances, and that makes it somewhat unsatisfactory, though of course it works well for the class of instantiations that doesn’t include Boltzmann brains.
As a less loaded model of some aspects of the problem, consider two atom-by-atom identical copies of a person who are given identical-looking closed boxes, one box containing a red glove and the other a green glove. If the green-glove copy for some reason decides that the box it’s seeing contains a green glove, then that copy is right. At the same time, if the green-glove copy so decides, then, since the copies are identical, the red-glove copy will also decide that its box contains a green glove, and it will be wrong. Since evidence about the contents of the boxes is not available to the copies, deciding either way is in some sense incorrect reasoning, even if it happens to produce a correct belief in one of the reasoners, at the cost of producing an incorrect belief in the other.
OK, that’s a good example. Let’s say the green-glove copy comes to the conclusion that its glove is green because of photons bouncing off the glove and interacting with its cones, which send certain signals along the optic nerve and so on. In the case of the red-glove copy, a thermodynamic fluctuation occurs that leads it to go through the exact same physical process. That is, the fluctuation makes the cones react just as if they had interacted with green photons, and the downstream process is exactly the same. In this case, you’d want to say both duplicates have unjustified beliefs? The green-glove duplicate arrived at its belief through a reliable process; the red-glove duplicate didn’t. I just don’t see why our conclusion about the justification has to be the same across both copies. Even if I bought this constraint, I’d want to say that both of their beliefs are in fact justified. The red-glove one’s belief is false, but false beliefs can be justified. The red-glove copy just got really unlucky.
Let’s say the green-glove copy comes to the conclusion that its glove is green because of photons bouncing off the glove and interacting with its cones
In my example, the gloves are not observed and the boxes are closed; the states of both copies’ brains, the nerve impulses they generate, and the words they say will all, by construction, be identical throughout the thought experiment.
(See also the edit to the grandparent comment; it could be the case that we already agree.)
Whoops, missed that bit. Of course, if either copy is forming a judgment about the glove’s color without actual empirical contact with the glove, then its belief is unjustified. I don’t think the identity of the copies is relevant to our judgment in this case. What would you say about the example I gave, where the box is open and the green-glove copy actually sees the glove? By hypothesis, the brains of both copies remain physically identical throughout the process. In this case, do you think we should judge that there is something problematic about the green-glove copy’s judgment that the glove is green? This case seems far more analogous to a situation involving a human and a Boltzmann brain.
ETA: OK, I just saw the edit. We’re closer to agreement than I thought, but I still don’t get the “unsatisfactory” part. In the example I gave, I don’t think there’s anything unsatisfactory about the green-glove copy’s belief-formation mechanism. It’s a paradigm example of forming a belief through a reliable process.
The sense in which your (correct) belief that you are not a Boltzmann brain is justified (or unjustified) seems to me analogous to the situation with the green-glove copy believing that its unobserved glove is green. Justification is a tricky thing: actually not being a Boltzmann brain, or actually being the green-glove copy, could in some sense be said to justify the respective beliefs, without a need to rely on distinguishing evidence, but it’s not entirely clear to me how that works.