Yes, agents whose inner model counts possible worlds, assigns probabilities, and calculates expected utility can succeed in a wider variety of situations than someone who always picks 1. No, thinking of yourself as "an entity that acts like it has a choice" does not generalize well, since "acting like you have a choice" leads you to CDT and two-boxing.
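To make the contrast concrete, here is a minimal sketch of the expected-utility arithmetic in Newcomb's problem. It's my own illustration, not anything from the original discussion: I'm assuming a 99% predictor accuracy and the standard payoffs ($1M in the opaque box, $1k in the transparent one). The "possible worlds" style of reasoning conditions the box contents on your own action; the CDT-style calculation treats the contents as fixed, which is why two-boxing looks dominant there.

```python
# Sketch only: assumed predictor accuracy and standard Newcomb payoffs.
ACCURACY = 0.99
MILLION, THOUSAND = 1_000_000, 1_000

def edt_expected_utility(action: str) -> float:
    """Evidential-style EU: condition box contents on your own action."""
    if action == "one-box":
        # The predictor most likely foresaw one-boxing and filled the opaque box.
        return ACCURACY * MILLION + (1 - ACCURACY) * 0
    else:  # "two-box"
        # The predictor most likely foresaw two-boxing and left it empty.
        return ACCURACY * THOUSAND + (1 - ACCURACY) * (MILLION + THOUSAND)

def cdt_expected_utility(action: str, p_box_full: float) -> float:
    """Causal-style EU: box contents are fixed, independent of the action."""
    base = p_box_full * MILLION
    return base + (THOUSAND if action == "two-box" else 0)

if __name__ == "__main__":
    print("EDT one-box:", edt_expected_utility("one-box"))   # ~990,000
    print("EDT two-box:", edt_expected_utility("two-box"))   # ~11,000
    # Under CDT, two-boxing "wins" for any fixed probability the box is full:
    for p in (0.0, 0.5, 1.0):
        print(f"CDT (p={p}):",
              "one-box =", cdt_expected_utility("one-box", p),
              "two-box =", cdt_expected_utility("two-box", p))
```

The point of the sketch is just that the possible-worlds calculation recommends one-boxing, while treating your action as a free causal intervention on fixed contents recommends two-boxing.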