Cool, thanks for the link; I found jessicata’s comment thread there helpful.
I agree that CDT overestimates the accessibility of worlds. I think one way to think about EDT is that it, too, is just counting worlds, probabilities, and utilities, but you’re calculating your probabilities differently, in a more UDT-ish way.
Consider another variant of this problem, where there are many islands, and the button only kills the psychopaths on its island. If Paul has a historical record that so far, all of the previous buttons that have been pressed were pressed by psychopaths, Paul might nevertheless think that his choice to press the button stems from a different source than psychopathy, and thus it’s worth pressing the button. [Indeed, the spicy take is that EDT doesn’t press the button, CDT does for psychopathic reasons and so dies, and FDT does for non-psychopathic reasons, and so gets the best outcome. ;) ]
Yes, if Paul thinks that he might not be a psychopath who dies, and has a probability associated with it, he would include this possible world in the calculation… obviously? Though this requires further specification of how much he values his own life vs. life with/without psychopaths around. If he values it infinitely, as presumably most psychopaths do, then he would not press the button, on the off chance that he is wrong. If the value is finite, then there is a break-even probability at which he is indifferent to pressing the button. I don’t understand how it is related to a decision theory; it’s just world counting and EV calculation. I must be missing something, I assume.
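If it helps to make that break-even point concrete, here is a minimal Python sketch with made-up utility numbers (the function and variable names, and the numbers, are just my own illustration; nothing here is specified in the problem):

```python
# A minimal sketch of the "world counting and EV calculation" above, with
# illustrative utility numbers of my own.

def break_even_probability(u_dead_no_psychos, u_alive_no_psychos, u_status_quo):
    """Credence in 'I am a psychopath' at which pressing and not pressing tie.
    Solves p * u_dead + (1 - p) * u_alive = u_status_quo for p."""
    return (u_alive_no_psychos - u_status_quo) / (u_alive_no_psychos - u_dead_no_psychos)

# Example: status quo = 0, alive in a psychopath-free world = +10, dead = -100.
p_star = break_even_probability(u_dead_no_psychos=-100, u_alive_no_psychos=10, u_status_quo=0)
print(p_star)  # ~0.091: press only if credence of being a psychopath is below this

# As u_dead_no_psychos -> -infinity (life valued "infinitely"), p_star -> 0,
# i.e. never press on any off chance of being wrong.
```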
Agreed that we need real-valued utilities to make clear recommendations in the case of uncertainty.
I don’t understand how it is related to a decision theory, it’s just world counting and EV calculation. I must be missing something, I assume.
For all of the consequentialist decision theories, I think you can describe what they’re doing as attempting to argmax a probability-weighted sum of utilities across possible worlds, and they differ on how they think actions influence probabilities / their underlying theory of how they specify ‘possible worlds’ and thus what universe they think they’re in. [That is, I think the interesting bit is the part you seem to be handling as an implementation detail.]
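To make that shared structure concrete, here’s a toy sketch (the helper names are mine; all the interesting disagreement lives in the prob_given_action argument):

```python
# A toy rendering of the shared argmax structure; the function names are my own.
# The decision theories differ only in what they plug in for prob_given_action.

def expected_utility(action, worlds, prob_given_action, utility):
    """Probability-weighted sum of utilities across possible worlds."""
    return sum(prob_given_action(w, action) * utility(w) for w in worlds)

def best_action(actions, worlds, prob_given_action, utility):
    """Argmax over actions of expected utility."""
    return max(actions, key=lambda a: expected_utility(a, worlds, prob_given_action, utility))

# CDT plugs in something like P(w | do(a)); EDT conditions evidentially, P(w | a);
# FDT/UDT condition on the output of the agent's decision procedure. Same argmax,
# different theory of how actions move probability over worlds.
```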