Hm, interesting point about causal decision theory. It seems to me that even under CDT, I should expect, as a (causal) consequence of pressing C, a higher probability that we get different vaccines than if I had only randomized between buttons A and B, because I can expect some probability that the other guy also presses C (in which case we both do). That would at least increase the overall probability that we get different vaccines, even if I’m not certain that we both press C. Though I find this confusing to reason about.
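To make that concrete, here’s a toy version of the numbers. The mechanics of button C aren’t restated in this thread, so the sketch just assumes: if we both press C we’re guaranteed different vaccines, a lone press of C behaves like a fair 50/50 pick between A and B, and p is my credence that the other guy presses C.

```python
# Toy calculation of the argument above. Assumed (not stated in this thread):
# if both of us press C we're guaranteed different vaccines; a lone press of C
# behaves like a fair 50/50 pick between A and B; p is my credence that the
# other player presses C.

def p_different_if_i_randomize() -> float:
    # A fair, independent coin on my side gives different vaccines with
    # probability 1/2 no matter what the other player does.
    return 0.5

def p_different_if_i_press_c(p: float) -> float:
    # With probability p we both press C (different vaccines for sure);
    # otherwise it's as if I had randomized.
    return p * 1.0 + (1 - p) * 0.5

for p in (0.0, 0.3, 0.7, 1.0):
    print(f"p={p}: randomize -> {p_different_if_i_randomize()}, "
          f"press C -> {p_different_if_i_press_c(p)}")
```

On those assumptions, pressing C comes out ahead of randomizing whenever p > 0, which is the intuition above.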
But anyway, this discussion of indexicals got me thinking about how to precisely express “actions” and “consequences” (outcomes?) in decision theory. It seems that they should always, trivially, include an explicit or implicit indexical, not just in cases like the example above: for an action X, “I make X true”, and for an outcome Y, “I’m in a world where Y is true”. Something like that. I’m not sure how significant this is, or whether there are counterexamples.
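Here’s a minimal sketch of what I mean by the indexical being “explicit or implicit” once you write these out as predicates on worlds; the world representation and agent names are invented purely for illustration.

```python
# Minimal sketch: write actions and outcomes as predicates on worlds that take
# an explicit "which agent is me?" argument, so the indexical becomes visible.
# (The world representation and names are invented for illustration.)

World = dict  # maps each agent's name to the vaccine they end up with

def i_make_true(me: str, vaccine: str):
    """The action 'I take vaccine X': only a definite claim once `me` is fixed."""
    return lambda world: world[me] == vaccine

def i_am_in_a_world_where(me: str, prop):
    """The outcome 'I'm in a world where Y is true': the proposition Y need not
    mention me at all, so here the indexical is implicit -- it only shows up in
    which world-history the evaluation happens from."""
    return lambda world: prop(world)

different_vaccines = lambda w: w["alice"] != w["bob"]

w: World = {"alice": "A", "bob": "B"}
print(i_make_true("alice", "A")(w))                           # True
print(i_am_in_a_world_where("alice", different_vaccines)(w))  # True
```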
Yeah, it’s confusing to me too. Not sure how to think about this under CDT.
I actually got rid of all indexicals in UDT because I found them too hard to think about. That seemed great for a while, until it occurred to me that humans plausibly have indexical values, and that it may not be straightforward to translate them into non-indexical values.
See also this comment where I talk about how UDT expresses actions and consequences. Note that “program-that-is-you” is not an indexical; it’s a string that encodes your actual source code. This also makes UDT hard or impossible for humans to use, since we don’t have access to our literal source code. See also “UDT shows that decision theory is more puzzling than ever”, which talks about these problems and others.
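Since the linked comment isn’t quoted here, here is a very loose toy illustration of the “string that encodes your actual source code” point, with a small program standing in for the agent; this is not how UDT is actually formalized, just a picture of the quoted-string vs. indexical distinction.

```python
import inspect

def policy(observation: str) -> str:
    # Toy agent: a program whose behaviour we can ask about.
    return "C"

# The framing described above: the decision problem is posed about "the
# program whose source code is this exact string", not about "me".
my_source = inspect.getsource(policy)  # a concrete string, not an indexical

def consequence(source_code: str, output: str) -> str:
    # Toy "world model": worlds are labelled by what the program with this
    # source code outputs in them. Purely illustrative.
    return (f"worlds where the program with source:\n{source_code}"
            f"outputs {output!r}")

print(consequence(my_source, policy("which button?")))
# A human can't make the analogous move, since we can't produce the literal
# string that encodes our own source code.
```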