I’m not so sure about that. An hour-long Boltzmann brain requires an hour of coincidences; a one-off coincidence that produces a habitable environment (maybe not a full Earth, but something that lasts a few hours) seems much more likely.
Sure. I am using “Boltzmann brain” as shorthand for a person who has the same memories as me, but was actually created out of fluctuations in a high-entropy, long-lived universe and merely has my memories by coincidence. The most likely way for such a person to have experiences for an hour is probably for them to be connected to some kind of coincidental simulation device, with a coincidental hour of tolerable environment around that simulation.
Just wanting to second what Charlie says here. As best I can tell, the decision-theoretic move made in the Boltzmann Brains section doesn’t work. Neal’s FNC has the result that (a) we become extremely confident that we are Boltzmann brains, and (b) to a first approximation we end up with an extremely high time and space discount rate, and to a second approximation we end up acting like solipsists as well, i.e. live in the moment, care only about yourself, etc. This is true even if you are standing in front of a button that would save 10^40 happy human lives via colonizing the light-cone, because a low-entropy region the size of the light cone is unbelievably less common than a low-entropy region the size of a matrix-simulation pod.
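A rough way to see the size of that gap (my own back-of-the-envelope, not something the post spells out): in the standard fluctuation picture, the probability of a spontaneous entropy dip of size $\Delta S$ scales like

$$P \sim e^{-\Delta S},$$

and the $\Delta S$ needed to carve out a low-entropy region grows with the region’s size, so the ratio $P(\text{light-cone-sized region}) / P(\text{pod-sized region}) \sim e^{-(\Delta S_{\text{cone}} - \Delta S_{\text{pod}})}$ is astronomically small.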