Well, first, Stuart discusses these problems in terms of decision theory rather than probability. I think this is the better approach: it avoids pointless debates over, e.g., the probability that Sleeping Beauty’s coin landed heads when all participants already agree on how she should act, as well as more complicated dilemmas where representing knowledge as probabilities just confuses people.
That said, your ideas could easily be rephrased as decision-theoretic rather than epistemic. The framework in Stuart’s paper would suggest imagining what strategy a hypothetical agent with your goals would have planned ‘in advance’ and then implementing that strategy. It may not be obvious that this gives the correct solution; the reasons I think it does come from UDT, which I can’t explain in the space of this comment, though there is a lot about it on the LW wiki. Alternatively, you may just find it obvious that the hypothetical-agent framing is equivalent. (Stuart’s proposed ADT may or may not be equivalent to UDT; it is unclear whether he intends precommitments to handle something like a variant of Parfit’s hitchhiker where the driver decides what to do before the hitchhiker comes into existence, but it seems that they wouldn’t. The differences are minor enough anyway.)
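To make the “plan in advance” framing concrete, here is a minimal sketch of my own (not anything from Stuart’s paper): I assume a fair coin, one awakening on heads and two on tails, and that at each awakening Beauty may buy, for an assumed price c, a ticket paying $1 if the coin landed heads. Each precommitted strategy is scored purely by its expected total payoff, with no probability assigned to “this awakening” at all.

```python
# Sketch: score precommitted strategies for a Sleeping Beauty bet by
# expected total payoff. The betting setup (price c, $1 heads ticket,
# per-awakening vs. per-world payoffs) is an illustrative assumption.

def expected_payoff(accept: bool, price: float, per_awakening: bool = True) -> float:
    """Expected total payoff of precommitting to always accept (or refuse) the ticket."""
    if not accept:
        return 0.0
    if per_awakening:
        # Heads (prob 1/2): one awakening, ticket pays $1.
        # Tails (prob 1/2): two awakenings, tickets pay nothing.
        return 0.5 * (1 - price) + 0.5 * (2 * -price)
    # Variant where only one purchase per possible world counts.
    return 0.5 * (1 - price) + 0.5 * (-price)

for price in (0.30, 0.40, 0.55):
    value = expected_payoff(True, price)
    print(f"price {price:.2f}: accepting is worth {value:+.3f} -> "
          f"{'accept' if value > 0 else 'refuse'}")

# With per-awakening tickets the break-even price is 1/3; with per-world
# tickets it is 1/2. Either way, once the payoff structure is fixed,
# halfers and thirders agree on the action.
```

The point is that the disagreement over “the” probability never enters the calculation; only the payoff structure does.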
You propose an alternative anthropic framework, which indicates that you either disagree that the hypothetical-agent framing is equivalent, or disagree that Stuart’s suggestion is the correct way for such an agent to act in these scenarios.
Yes. Anything in particular there you think is relevant?