I didn’t quite follow that last section. How do considerations about boundedness and “only matters if it makes something happen differently” undermine the reasoning you laid out in the “FDT” section, which seems solid to me? Here’s my attempt at a counterargument; hopefully we can have a discussion & clear things up that way.
I am arguing for this thesis: As an altruistic FDT/UDT agent, the optimal move is always “think happy thoughts,” even when you aren’t thinking about Boltzmann Brains or FDT/UDT.
In the space of Boltzmann-brains-that-might-be-me, probability/measure is not distributed evenly: simpler algorithms are more likely, i.e. have more measure.
I am probably one of the simpler algorithms in that space.
So, yes: for every action a I could choose, there is some chunk of BBs out there that chooses a, and hence in one sense my choice makes no difference to what the BBs do, only to which ones I am logically correlated with. But it is also true that my choice controls the choice of the largest chunk of BBs: if I choose a, the largest chunk chooses a, and if I choose b, the largest chunk chooses b. (A toy version of this calculation is sketched after the argument.)
So I should think happy thoughts.
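To make the measure point concrete, here is a toy version of the calculation. The numbers, the names, and the assumption that exactly three algorithms split the measure are all mine, purely for illustration:

```python
# Toy FDT/UDT-style calculation (illustrative numbers only).
# Measure over Boltzmann-brain "chunks", grouped by which algorithm they run.
# Assumption: my algorithm is the simplest, so it gets the largest chunk.
measure = {
    "my_algorithm": 0.6,   # the chunk whose output my decision controls
    "other_algo_1": 0.3,   # these chunks' outputs are fixed regardless of me
    "other_algo_2": 0.1,
}

# Happiness produced by each possible output of an algorithm.
happiness = {"think_happy_thoughts": 1.0, "think_neutral_thoughts": 0.0}

# The other algorithms' outputs don't depend on my choice.
fixed_outputs = {"other_algo_1": "think_neutral_thoughts",
                 "other_algo_2": "think_happy_thoughts"}

def total_happiness(my_output: str) -> float:
    """Measure-weighted happiness across all chunks, given my algorithm's output."""
    total = measure["my_algorithm"] * happiness[my_output]
    for algo, output in fixed_outputs.items():
        total += measure[algo] * happiness[output]
    return total

# Every output is chosen by *some* chunk either way; what my decision changes is
# the output of the largest chunk, and hence the measure-weighted total.
print(total_happiness("think_happy_thoughts"))    # 0.7
print(total_happiness("think_neutral_thoughts"))  # 0.1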
The argument I just gave was designed to address your point that “naively making yourself happy means that your Boltzmann brain copies will be happy: but this isn’t actually increasing the happiness across all Boltzmann brains, just changing which ones are copies of you,” but I may have misunderstood it.
P.S. I know you argued earlier that the entropy of a BB doesn’t matter, because its contribution to the probability is dwarfed by the contribution of the mass. But as long as the entropy’s contribution is nonzero, I think my argument still works: holding mass constant, higher-entropy BB configurations will be more likely. (Perhaps I should replace “simpler” in the argument above with “higher-entropy,” then.)
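Here is a rough sketch of the dependence I have in mind, using the standard thermal-fluctuation estimate (the specific form is my assumption, so correct me if the physics is off):

$$P(\text{BB configuration}) \;\propto\; \exp\!\left(S_{\text{internal}} - \frac{E}{k_B T}\right),$$

where $E \approx mc^2$ is the energy tied up in the brain’s mass, $T$ is the temperature of the background, and $S_{\text{internal}}$ is the configuration’s internal entropy in units of $k_B$. The $E/k_B T$ term dominates, as you said, but at fixed $E$ the probability still grows with $S_{\text{internal}}$, which is all my argument needs.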