Cohesive decision theory lacks the logical/algorithmic ontology of UDT and is closer to what we call “updateless EDT/CDT” (the paper itself talks about cohesive versions of both).
Also interested in a response from Sylvester, but I would guess that one of the main critiques is something like Will MacAskill’s Bomb thought experiment, or just intuitions for paying the counterfactual mugger. From my perspective, these critiques do have a point when it comes to humans, since humans seemingly have indexical values. One way to explain why UDT makes recommendations in these thought experiments that look “bizarre” to many humans is that it assumes away indexical values (via the type signature of its utility function). (This was an implicit and not entirely intentional assumption, and it’s unclear how to remove it while retaining the nice properties associated with updatelessness.) I’m unsure whether indexical values are themselves normative or philosophically justified, and they are probably irrelevant or undesirable when it comes to AIs, but I’d guess academic philosophers take them more for granted and are less interested in AI (and therefore take a dimmer view of updatelessness/cohesiveness).
But yeah, if there are good critiques/responses aside from these, it would be interesting to learn them.