It may be that most minds with your thoughts do in fact disappear after an instant. Of course, if that is the case, there will be vastly more minds with chaotic or jumbled thoughts. But the fact that we observe order is no evidence against the existence of additional minds observing chaos, unless you reject self-indication.
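To see why, here is a minimal Bayes sketch, assuming self-indication means weighting each hypothesis by its observer count (the symbols $H_1$, $H_2$, $N_o$, $N_c$, $q_1$, $q_2$ are illustrative, not part of the original argument). Let $H_1$ say only $N_o$ ordered minds exist, let $H_2$ add $N_c$ chaotic minds, and let $q_1$, $q_2$ be the non-anthropic priors. Then

\[
\frac{P(H_2 \mid \text{I observe order})}{P(H_1 \mid \text{I observe order})}
= \underbrace{\frac{q_2\,(N_o + N_c)}{q_1\,N_o}}_{\text{SIA-weighted prior odds}}
\times
\underbrace{\frac{N_o/(N_o + N_c)}{N_o/N_o}}_{\text{likelihood ratio}}
= \frac{q_2}{q_1}.
\]

The self-indication boost and the anthropic penalty cancel exactly, so observing order leaves the odds wherever the non-anthropic priors put them; drop self-indication and only the penalty remains, making order count against the extra chaotic minds.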
So, your experience of order is not good evidence that more of your instances are non-Boltzmann than Boltzmann. But as I said, in the long term your expected accuracy will rise if you commit to not believing you are a Boltzmann brain, even if you believe that you most likely are one now.
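A toy calculation, under the hypothetical assumption that at some moment there are $B$ Boltzmann copies of you, each lasting one instant, and one ordinary copy that persists:

\[
\text{accuracy of the commitment ``I am not a Boltzmann brain''}
= \frac{1}{B+1} \ \text{at } t = 0,
\qquad
= 1 \ \text{at any } t \ge 1,
\]

since only the ordinary copy survives past the first instant. However large $B$ is, the copies still around to hold the belief in the long term are exactly the ones for which it is true, while the opposite commitment is eventually wrong with certainty.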
A somewhat analogous situation may arise in AGI: AI makers can rule out certain possibilities (e.g., that the AI is being simulated in a way that makes its simulated makers non-conscious) that the AI itself cannot. Thus, by having the AI rule such things out a priori, the makers can improve the AI’s beliefs in ways that the AI itself, however superintelligent, rationally could not.
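One hedged way to model the mechanism in Bayesian terms: if the makers build the AI with a hard prior of zero on such a hypothesis $H$, conditioning can never revive it, since for any evidence $E$ with $P(E) > 0$,

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} = 0 \quad \text{whenever } P(H) = 0.
\]

The exclusion is thus stable under any amount of further updating, whereas an AI starting from nonzero priors could not rationally talk itself down to zero on evidence alone.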