My thoughts:
1) “Copy-egoistic” and “copy-altruistic” seem misleading, because Omega creates different agents in the heads and tails case. Plain “egoistic” and “altruistic” would work, though.
2) Multiple worlds vs single world should be irrelevant to UDT.
3) I think UDT would one-box if it’s egoistic, and be indifferent if it’s altruistic.
Here’s why I think egoistic UDT would one-box. From the problem setup it’s provable that one-boxing implies finding money in box A. That’s exactly the information that UDT requires for decision making (“logical counterfactual”). It doesn’t need to deduce unconditionally that there’s money in box A or that it will one-box.
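To make that concrete, here’s a minimal sketch of the egoistic case. The details are my assumptions about the Coin Flip Creation setup, not quoted from the post: heads means Omega creates a one-boxing agent and puts money in box A, tails means it creates a two-boxing agent and leaves box A empty, and the payoffs ($1M / $1K) are just illustrative. The point is only that “one-boxing implies money in box A” holds in every world consistent with the setup, which is all the agent needs.

```python
# Sketch of the reasoning above: enumerate the worlds consistent with the
# (assumed) problem setup, find the payout u for which "I take action a -> I get u"
# holds in all of them, and pick the action with the best such provable payout.

from itertools import product

ONE_BOX, TWO_BOX = "one-box", "two-box"

def consistent_worlds():
    """Worlds allowed by the assumed setup:
    heads -> Omega creates a one-boxing agent and fills box A;
    tails -> Omega creates a two-boxing agent and leaves box A empty."""
    worlds = []
    for coin, action in product(["heads", "tails"], [ONE_BOX, TWO_BOX]):
        box_a_full = (coin == "heads")
        agent_one_boxes = (coin == "heads")
        # The agent's action must match the agent Omega actually created.
        if (action == ONE_BOX) == agent_one_boxes:
            worlds.append({"action": action, "box_a_full": box_a_full})
    return worlds

def payout(world):
    box_a = 1_000_000 if world["box_a_full"] else 0
    box_b = 1_000
    return box_a + (box_b if world["action"] == TWO_BOX else 0)

def provable_payout(action):
    """The payout u such that 'take this action -> receive u' holds in every
    consistent world (the 'logical counterfactual' referred to above)."""
    payouts = {payout(w) for w in consistent_worlds() if w["action"] == action}
    return payouts.pop() if len(payouts) == 1 else None

best = max([ONE_BOX, TWO_BOX], key=lambda a: provable_payout(a) or 0)
print(best)  # -> 'one-box': provably $1,000,000 vs. provably $1,000 for two-boxing
```

Note that the agent never deduces unconditionally which agent it is or whether box A is full; it only needs the implication from its action to its payout.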
I agree with points 1) and 2). Regarding point 3), that’s interesting! Do you think one could also prove that if you don’t smoke, you can’t (or are less likely to) have the gene in the Smoking Lesion? (See also my response to Vladimir Nesov’s comment.)
I can only give a clear-cut answer if you reformulate the smoking lesion problem in terms of Omega and specify the UDT agent’s egoism or altruism :-)
That’s what I was trying to do with the Coin Flip Creation :) My guess: once you specify the Smoking Lesion and make it unambiguous, it ceases to be an argument against EDT.
What exactly do you think we need to specify in the Smoking Lesion?
I’d be curious to hear about your other example problems. I’ve done a bunch of research on UDT over the years, implementing it as logical formulas and applying it to all the problems I could find, and I’ve become convinced that it’s pretty much always right. (There are unsolved problems in UDT, like how to treat logical uncertainty or source code uncertainty, but these involve strange situations that other decision theories don’t even think about.) If you can put EDT and UDT in sharp conflict, and give a good argument for EDT’s decision, that would surprise me a lot.