Hi, I’ve read your paper on anthropic decision theory. Personally I think it gives the most complete explanation of bets and decisions related to paradoxes such as the Sleeping Beauty problem. I cited it in my paper and recommend it whenever a discussion about bets in the Sleeping Beauty problem comes up. That being said, I feel that treating the anthropic paradoxes as purely decision-making problems is very counter-intuitive.
Take the Doomsday argument, for example. The explanation you provided here illustrates why someone would bet heavily in favour of doom soon given the reward setup, even when they do not assign a higher probability to it: their objective is to maximise average utility. However, that seems different from what the original Doomsday argument is about. In its original form, it demonstrates that a Bayesian update on my birth rank shifts the probability towards doom soon; my preference for average or total utility plays no part in its logic. So there is a distinction between actually believing in doom soon and strategically betting on doom soon based on some utility objective. Because of this, I think we cannot bypass the probabilities and discuss only the decision-making in anthropic paradoxes.
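To make that contrast concrete, here is a minimal sketch of the update the original argument relies on. The hypotheses, priors, and population figures are illustrative numbers I chose, not anything from your paper; the point is only that no utility function appears anywhere in the calculation.

```python
# Toy illustration of the Doomsday argument's Bayesian update on birth rank.
# All numbers below are illustrative choices, not figures from the paper.

# Two hypotheses about the total number of humans who will ever live:
N_SOON = 2e11   # "doom soon": ~200 billion humans in total
N_LATE = 2e14   # "doom late": ~200 trillion humans in total

prior_soon = 0.5
prior_late = 0.5

# My birth rank: roughly the 60-billionth human under either hypothesis.
birth_rank = 6e10

# Self-Sampling Assumption: treat my birth rank as uniformly drawn from all
# humans who ever live, so P(rank | N) = 1/N whenever rank <= N.
likelihood_soon = 1.0 / N_SOON
likelihood_late = 1.0 / N_LATE

# Bayes' rule: the posterior shifts heavily towards "doom soon", because a
# rank of 60 billion is far more probable if only 200 billion ever exist.
evidence = prior_soon * likelihood_soon + prior_late * likelihood_late
posterior_soon = prior_soon * likelihood_soon / evidence
posterior_late = prior_late * likelihood_late / evidence

print(f"P(doom soon | birth rank) = {posterior_soon:.4f}")  # ~0.999
print(f"P(doom late | birth rank) = {posterior_late:.4f}")  # ~0.001
```

The update is driven entirely by the likelihoods of my birth rank under the two hypotheses; whether I care about average or total utility never enters.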