I can only give a clear-cut answer if you reformulate the Smoking Lesion problem in terms of Omega and specify whether the UDT agent is egoistic or altruistic :-)
That’s what I was trying to do with the Coin Flip Creation :) My guess: once you specify the Smoking Lesion and make it unambiguous, it ceases to be an argument against EDT.
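To make the ambiguity concrete, here is a minimal toy model of two readings of the Smoking Lesion. All probabilities and utilities below are invented for illustration, and the "tickle defense" reading is just one standard way to cash out the disambiguation:

```python
# Toy Smoking Lesion -- every number below is invented for illustration.
P_LESION = 0.5                          # prior probability of the lesion
P_SMOKE = {True: 0.9, False: 0.1}       # P(smoke | lesion) in the population
U_SMOKE, U_CANCER = 10, -1000           # smoking is pleasant; the lesion causes cancer

def p_lesion_given_action(smokes: bool) -> float:
    """Bayes: P(lesion | action), treating the action alone as evidence."""
    like = lambda lesion: P_SMOKE[lesion] if smokes else 1 - P_SMOKE[lesion]
    num = like(True) * P_LESION
    return num / (num + like(False) * (1 - P_LESION))

# Reading 1: the action itself is evidence about the lesion.
for act in (True, False):
    eu = U_SMOKE * act + U_CANCER * p_lesion_given_action(act)
    print(f"naive EDT, smoke={act}: EU = {eu:.0f}")
# EU(smoke) = -890, EU(don't) = -100 -> naive EDT refuses to smoke.

# Reading 2 ("tickle defense"): the lesion causes smoking only via a craving
# the agent can introspect, so given the craving, the action carries no
# further evidence: P(lesion | craving, action) = P(lesion | craving).
p_lesion_given_craving = 0.9            # assumed; any fixed value works
for act in (True, False):
    eu = U_SMOKE * act + U_CANCER * p_lesion_given_craving
    print(f"tickle EDT, smoke={act}: EU = {eu:.0f}")
# Smoking now wins by exactly U_SMOKE -> disambiguated EDT smokes.
```

On the first reading the action itself is evidence of the lesion and EDT refuses to smoke; on the second the correlation is screened off by the craving, and EDT gives the intuitively correct verdict.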
What exactly do you think we need to specify in the Smoking Lesion?
I’d be curious to hear about your other example problems. I’ve done a bunch of research on UDT over the years, implementing it as logical formulas and applying it to all the problems I could find, and I’ve become convinced that it’s pretty much always right. (There are unsolved problems in UDT, like how to treat logical uncertainty or source code uncertainty, but these involve strange situations that other decision theories don’t even think about.) If you can put EDT and UDT in sharp conflict, and give a good argument for EDT’s decision, that would surprise me a lot.
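For concreteness, here is the flavor of that exercise in miniature: a brute-force stand-in for the logical-formula version, applied to Newcomb's problem, with Omega modeled as running a copy of the agent's own policy. The payoffs are the standard ones; everything else is a simplifying assumption:

```python
# Toy Newcomb's problem: UDT as exhaustive search over the agent's possible
# policies, with Omega modeled as running a copy of that same policy.

POLICIES = ("one-box", "two-box")

def payoff(action: str, prediction: str) -> int:
    """Box B holds $1,000,000 iff Omega predicted one-boxing; box A always holds $1,000."""
    big = 1_000_000 if prediction == "one-box" else 0
    small = 1_000 if action == "two-box" else 0
    return big + small

def udt_choice() -> str:
    # UDT asks "what if my algorithm outputs X?" -- and since Omega's
    # prediction comes from that same algorithm, prediction == action.
    return max(POLICIES, key=lambda a: payoff(a, prediction=a))

def cdt_choice(fixed_prediction: str = "one-box") -> str:
    # CDT holds the prediction causally fixed while varying the action;
    # two-boxing is better for either fixed value, so CDT two-boxes.
    return max(POLICIES, key=lambda a: payoff(a, fixed_prediction))

print(udt_choice())   # one-box  -> $1,000,000
print(cdt_choice())   # two-box  -> $1,000 against an accurate predictor
```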