The Adam and Eve example really helped me understand the correspondence between “ADT average utilitarians” and “CDT average utilitarians”. Thanks!
It’s also kind of funny that one of the inputs is “assume a 50% chance of pregnancy from having sex”—it seems like an odd input to allow in anthropic decision-making, though it can be cashed out in terms of reasoning using a model of the world with certain parameters that look like Markov transition probabilities.
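To make that concrete, here is a minimal sketch of what I mean (the state and action names are just my own illustration, not anything from the post): the 50% pregnancy chance sits in the agent's world-model as an ordinary transition probability, with nothing anthropic about it.

```python
# Toy world-model in which "50% chance of pregnancy from having sex" is just
# a transition probability. (State/action names invented for this sketch.)
transition_probs = {
    ("in_eden", "have_sex"): {"pregnant": 0.5, "not_pregnant": 0.5},
    ("in_eden", "abstain"):  {"not_pregnant": 1.0},
}

def next_state_distribution(state, action):
    """Distribution over successor states for a state-action pair."""
    return transition_probs[(state, action)]

print(next_state_distribution("in_eden", "have_sex"))
# {'pregnant': 0.5, 'not_pregnant': 0.5}
```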
And of course, one shouldn’t forget that, by their own standards, SSA Adam and Eve are making a mistake. (This becomes more obvious if we replace probabilities with frequencies—if we change this “50% chance of pregnancy” into two actual copies of them, one of which will get pregnant, but keep their decisions fixed, we can deterministically money-pump them.) It’s all well and good to reverse-engineer their decisions into a different decision-making format, but we shouldn’t use a framework that can’t imagine people making mistakes.
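Here is roughly the arithmetic I have in mind for the money-pump; the population size, the bet, and the exact form of the SSA update are all numbers I am inventing for illustration.

```python
# Rough arithmetic for the money-pump (population size, bet, and the exact SSA
# update are all invented for this sketch).
N = 1_000_000_000            # assumed number of descendants if pregnancy occurs
prior = 0.5                  # physical chance of pregnancy from having sex

# SSA-style update: weight each hypothesis by the chance of finding yourself
# among the first two observers within it (2/(N+2) if pregnant, 1 if not).
w_pregnant = prior * (2 / (N + 2))
w_not_pregnant = (1 - prior) * 1.0
ssa_p_pregnant = w_pregnant / (w_pregnant + w_not_pregnant)   # roughly 2e-9

# A bet that looks great at those odds: lose $10 if pregnancy happens,
# win $0.01 otherwise.
ev_at_ssa_odds = -10 * ssa_p_pregnant + 0.01 * (1 - ssa_p_pregnant)
print(f"SSA credence in pregnancy: {ssa_p_pregnant:.2e}")
print(f"Expected value of the bet, by their lights: {ev_at_ssa_odds:+.4f}")

# Frequency version: two actual copies of the couple, exactly one gets pregnant.
# Hold their decisions fixed, so both copies take the bet; the bookie collects
# $10 from one copy and pays $0.01 to the other, for a guaranteed profit.
bookie_profit = 10 - 0.01
print(f"Bookie's certain profit across the two copies: {bookie_profit:.2f}")
```

The point is just that credences shifted by the size of the reference class, combined with fixed betting behaviour, hand a bookie a sure profit once the 50% is realised as two actual copies.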
Cheers!
Nope, they are making the correct decision if they value their own pleasure in an average-utilitarian way, for some reason.
Weighting rewards according to population is what ADT Adam and Eve do; they take identical actions to SSA Adam and Eve but can have different reasons. SSA Adam and Eve are trying to value their future reward in proportion to how likely they are to receive it. Like, if these people actually existed and you could talk to them about their decision-making process, I imagine that ADT Adam and Eve would say different things than SSA Adam and Eve.
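To make the "identical actions, different reasons" point concrete, here is a stylized sketch (the reward structure and every number are assumptions of mine, not from the post) of how weighting the reward by 1/population, as an ADT average utilitarian, and weighting it by the SSA chance of being one of its recipients give the same expected values, and hence the same choices.

```python
# Stylized comparison of the two valuations (all numbers are made up).
N = 1_000_000_000      # descendants created if pregnancy occurs
prior = 0.5            # physical chance of pregnancy from having sex
pleasure = 1.0         # reward to Adam and Eve from having sex, arbitrary units

def ssa_selfish_value():
    """SSA story: value the reward in proportion to how likely I am to be
    one of the people who receives it, given the resulting population."""
    p_receive_if_pregnant = 2 / (N + 2)   # chance of being Adam or Eve under SSA
    return prior * p_receive_if_pregnant * pleasure + (1 - prior) * pleasure

def adt_average_value():
    """ADT story: the reward is certainly received, but it is averaged over
    everyone who ends up existing in each branch."""
    avg_if_pregnant = 2 * pleasure / (N + 2)
    avg_if_not_pregnant = 2 * pleasure / 2
    return prior * avg_if_pregnant + (1 - prior) * avg_if_not_pregnant

print(ssa_selfish_value())    # ~0.5: the pregnancy branch gets almost no weight
print(adt_average_value())    # same number, reached by a different story
```

Numerically the two come out the same, which is exactly why you would only hear the difference if you asked them to explain themselves.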
Ah yes, I misread “SSA Adam and Eve” as “SSA-like ADT Adam and Eve (hence average utilitarian)”.