Thanks for putting this together! Lots of ideas I hadn’t seen before.
As for the meta-level problem, I agree with MSRayne that we should do the thing that maximises EU, which leads me to the ADT/UDT approach. This assumes we can have some non-anthropic prior, which seems reasonable to me.
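Roughly the picture I have in mind (my own notation, just a sketch): pick the policy $\pi$ that maximises

$$\mathbb{E}[U \mid \pi] = \sum_{w} P(w)\, U(\pi, w),$$

where $P$ is the non-anthropic prior over worlds and no update on "which observer I am" is performed.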
I think one of the problems here is that my utility function may include indexical preferences, like “I want to be in a simulation” or “I don’t want to be a Boltzmann brain”. In that case I am back to needing to update, as I again have to take my indexical information into account.
Also, it allows a kind of “utility monster”: I should act as if I will have the biggest possible impact on the future of humanity, even if the prior odds of that are small.
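To make that worry concrete, here is a toy calculation (the numbers and the payoff structure are entirely made up by me, just to show the shape of the problem):

```python
# Toy illustration: a UDT-style agent maximises EU over a non-anthropic prior,
# so a tiny-prior hypothesis with a huge enough payoff can dominate.
# Simplifying assumption: acting on the wrong hypothesis pays nothing.

worlds = {
    # hypothesis: (non-anthropic prior, utility if I act on it and it's true)
    "I am pivotal for humanity's future": (1e-6, 1e9),
    "I am an ordinary observer":          (1 - 1e-6, 1.0),
}

def expected_utility(policy):
    """EU of acting on `policy`, with no anthropic update on the prior."""
    return sum(prior * (payoff if name == policy else 0.0)
               for name, (prior, payoff) in worlds.items())

for name in worlds:
    print(f"{name}: EU = {expected_utility(name):.1f}")
# The 'pivotal' policy wins (1e-6 * 1e9 = 1000 vs ~1): the long shot
# dominates precisely because we never update on how unlikely it is
# that *I* am that person.
```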