In this comment:
http://lesswrong.com/lw/17d/forcing_anthropics_boltzmann_brains/138u
I put forward my view that the best solution is to just maximize total utility, which correctly handles the forcing anthropics case, and expressed curiosity as to whether it would handle the outlawing anthropics case.
It now seems my solution does correctly handle the outlawing anthropics case, which would seem to be a data point in its favor.
Maximizing total hedonic utility fails the outlawing anthropics case: substitute hedons for paperclips.
I don’t think I understand your claim here. We agree that my solution works if you measure utility in paperclips? Why do you think it fails if you measure utility in hedons?