Thus Boltzmann brains can mess things up from a probability theory standpoint, but we should ignore them from a decision theory standpoint.
Is that true? Imagine you have this choice:
1) Spend the next hour lifting weights
2) Spend the next hour eating chocolate
Lifting weights pays off later, but eating chocolate pays off right away. If you believe there’s a high chance that, conditional on surviving the next hour, you’ll dissolve into Boltzmann foam immediately after that—why not eat the chocolate?
Taking the question at face value, I would choose to lift weights, for policy-selection reasons. If I eat chocolate, the non-Boltzmann-brain versions of me will eat it too, and I care a lot more about those versions. I'm not sure how to square that mathematically with infinitely many versions of me existing, but I was already confused about that.
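To gesture at the math (a rough sketch with made-up caring weights, not anything precise): suppose I weight outcomes for non-Boltzmann-brain copies of me by $w_{\text{real}}$ and outcomes for Boltzmann-brain copies by $w_{\text{BB}}$, with $w_{\text{real}} \gg w_{\text{BB}}$. Then the value of a policy $\pi$ is roughly

$$V(\pi) \;=\; w_{\text{real}}\,U_{\text{real}}(\pi) \;+\; w_{\text{BB}}\,U_{\text{BB}}(\pi),$$

and since the Boltzmann-brain term barely registers, "lift weights" beats "eat chocolate" whenever it's better for the non-Boltzmann copies, however the raw copy counts compare (setting aside the normalization problem with infinitely many copies that I'm already confused about).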
The theme here seems similar to Stuart's past writing arguing that a lot of anthropic problems implicitly turn on preference. The answer to your decision problem seems to depend heavily on how much you care about Boltzmann-brain versions of yourself.
If you believe you're a Boltzmann brain, you shouldn't even be asking what to do next, because you believe that in the next microsecond you won't exist. If you do survive any longer than that, that's extremely strong evidence that you're not a Boltzmann brain; so conditional on your actually being able to make a choice about the next hour, it still makes sense to choose to lift weights.
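As a rough Bayes sketch of that last point: let $B$ be "I am a Boltzmann brain" and $S$ be "I persist coherently for the next hour". Then

$$\frac{P(B \mid S)}{P(\neg B \mid S)} \;=\; \frac{P(S \mid B)}{P(S \mid \neg B)} \cdot \frac{P(B)}{P(\neg B)},$$

and since $P(S \mid B)$ is astronomically small (an hour of sustained coincidences) while $P(S \mid \neg B)$ is close to 1, surviving the hour produces an enormous update against $B$; so conditional on actually getting to act over the hour, the calculation looks like the ordinary one.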
In a truly max-entropy universe, the probability of being a Boltzmann brain that survives for one hour is greater than the probability of being on Earth. High entropy is a weird place.
I’m not so sure about that. An hour-long Boltzmann brain requires an hour of coincidences; a one-off coincidence that produces a habitable environment (maybe not a full Earth, but something that lasts a few hours) seems much more likely.
Sure. I am using "Boltzmann brain" as shorthand for a person who has the same memories as me but was actually created out of fluctuations in a high-entropy, long-lived universe and merely has my memories by coincidence. The most likely way for such a person to have an hour of experiences is probably to be connected to some kind of coincidental simulation device, with a coincidental hour of tolerable environment around that simulation.
Just wanting to second what Charlie says here. As best I can tell, the decision-theoretic move made in the Boltzmann Brains section doesn't work. Neal's FNC has the result that (a) we become extremely confident that we are Boltzmann brains, and (b) to a first approximation we end up with an extremely high time and space discount rate, and to a second approximation we end up acting like solipsists as well, i.e. live in the moment, care only about yourself, etc. This is true even if you are standing in front of a button that would save 10^40 happy human lives by colonizing the light cone, because a low-entropy region the size of the light cone is unbelievably less common than a low-entropy region the size of a matrix-simulation pod.
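To put rough numbers on that last comparison (a back-of-the-envelope sketch; only the scaling matters): the probability of a thermal fluctuation into a low-entropy configuration goes roughly like

$$P \;\sim\; e^{-\Delta S / k_B},$$

where $\Delta S$ is the entropy decrease required, which grows with the size of the region being ordered. A brain-plus-simulation-pod-sized fluctuation pays a pod-sized $\Delta S$; a light-cone-sized habitable region pays a cosmological $\Delta S$. The ratio of those two exponentials swamps any factor of $10^{40}$ lives you might put on the other side of the decision.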
New and better reason to ignore Boltzmann brains in (some) anthropic calculations: https://www.lesswrong.com/posts/M9sb3dJNXCngixWvy/anthropics-and-fermi