If you believe you’re a Boltzmann brain, you shouldn’t even be asking what you should do next, because you believe you won’t exist in the next microsecond. Surviving any longer than that would be extremely strong evidence that you’re not a Boltzmann brain, so conditional on you actually being able to choose what to do in the next hour, it still makes sense to choose to lift weights.
In a truly max-entropy universe, the probability of being a Boltzmann brain that survives for one hour is greater than the probability of being on Earth. A high-entropy universe is a weird place.
I’m not so sure about that. An hour-long Boltzmann brain requires an hour of coincidences; a one-off coincidence that produces a habitable environment (maybe not a full Earth, but something that lasts a few hours) seems much more likely.
Sure. I am using “Boltzmann brain” as shorthand for a person who has the same memories as I do, but was actually created out of fluctuations in a high-entropy, long-lived universe and has my memories only by coincidence. The most likely way for such a person to have experiences for an hour is probably to be connected to some kind of coincidental simulation device, with a coincidental hour of tolerable environment around that simulation.
Just want to second what Charlie says here. As best I can tell, the decision-theoretic move made in the Boltzmann Brains section doesn’t work; Neal’s FNC (full non-indexical conditioning) has the result that (a) we become extremely confident that we are Boltzmann brains, and (b) to a first approximation we end up with an extremely high time and space discount rate, and to a second approximation we end up acting like solipsists as well, i.e. living in the moment and caring only about ourselves. This is true even if you are standing in front of a button that would save 10^40 happy human lives by colonizing the light-cone, because a low-entropy region the size of the light cone is unbelievably less common than a low-entropy region the size of a matrix-simulation pod.
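(To spell out the scaling behind “unbelievably less common”: in standard statistical mechanics, the probability of a thermal fluctuation into a configuration whose entropy sits ΔS below equilibrium falls off roughly as e^{-ΔS/k_B}, and ΔS grows with the size of the region being fluctuated into order. A rough sketch, using placeholder orders of magnitude rather than measured entropies:

$$P(\text{fluctuation}) \sim e^{-\Delta S / k_B}, \qquad \frac{P(\text{light-cone})}{P(\text{pod})} \sim e^{-\left(\Delta S_{\text{light-cone}} - \Delta S_{\text{pod}}\right)/k_B}.$$

If, purely for illustration, $\Delta S_{\text{pod}} \sim 10^{50}\,k_B$ and $\Delta S_{\text{light-cone}} \sim 10^{100}\,k_B$, the ratio is of order $e^{-10^{100}}$, which is the sense in which the light-cone-sized fluctuation is “unbelievably” rarer.)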