Don’t worry, that’s not an uncomfortable question. UDT and MDT are quite different. UDT is a first-order decision theory. MDT is a way of extending decision theories—so that you take into account uncertainty about which decision theory to use. (So, one can have meta causal decision theory, meta evidential decision theory, and (probably, though I haven’t worked through it) meta updateless decision theory.)
UDT, as I understand it (and note that I’m not at all fluent in UDT or TDT), always one-boxes; whereas if you take decision-theoretic uncertainty into account you should sometimes one-box and sometimes two-box, depending on the relative value of the contents of the two boxes. Also, UDT gets what most decision theorists consider the wrong answer in the smoking lesion case, whereas the account I defend, meta causal decision theory, doesn’t (or, at least, needn’t, depending on one’s credences in first-order decision theories).
To illustrate, consider the case:
High-Stakes Predictor II (HSP-II)
Box C is opaque; Box D, transparent. If the Predictor predicts that you choose Box C only, then he puts one wish into Box C, and also a stick of gum. With that wish, you save the lives of 1 million terminally ill children. If he predicts that you choose both Box C and Box D, then he puts nothing into Box C. Box D — transparent to you — contains an identical wish, also with the power to save the lives of 1 million children, so if one had both wishes one would save 2 million children in total. However, Box D contains no gum. One has two options only: choose Box C only, or both Box C and Box D.
In this case, intuitively, should you one-box or two-box? My view is clear: if someone one-boxes in the above case, they have made the wrong decision. And it seems to me that this is best explained by appeal to decision-theoretic uncertainty.
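To make the arithmetic explicit, here is a minimal sketch of the expectation-over-theories calculation for HSP-II. The numbers (the value of a saved life, the value of a stick of gum, the credence q that Box C is full, and a 50/50 split of credence between CDT and EDT) are illustrative assumptions of mine, not anything from the paper:

```python
# A minimal sketch (assumed numbers) of expected choiceworthiness under
# decision-theoretic uncertainty in HSP-II. Units are arbitrary; only the
# relative sizes matter.

LIVES_PER_WISH = 1_000_000
LIFE = 1.0       # assumed value of saving one life
GUM = 1e-9       # assumed value of a stick of gum (tiny next to a life)

# EDT's verdict, treating the predictor as (effectively) perfectly reliable:
# one-boxing gets the Box C wish plus the gum; two-boxing gets only Box D's wish.
edt = {
    "one-box": LIVES_PER_WISH * LIFE + GUM,
    "two-box": LIVES_PER_WISH * LIFE,
}

# CDT's verdict: the prediction is already fixed, so whatever is in Box C,
# two-boxing adds the transparent Box D wish. With q = credence that C is full:
q = 0.5  # any q gives the same ranking; two-boxing dominates by one wish
cdt = {
    "one-box": q * (LIVES_PER_WISH * LIFE + GUM),
    "two-box": q * (LIVES_PER_WISH * LIFE + GUM) + LIVES_PER_WISH * LIFE,
}

# Meta step: weight each theory's choiceworthiness by your credence in it.
credences = {"EDT": 0.5, "CDT": 0.5}  # assumed credences

def expected_choiceworthiness(act: str) -> float:
    return credences["EDT"] * edt[act] + credences["CDT"] * cdt[act]

for act in ("one-box", "two-box"):
    print(f"{act}: {expected_choiceworthiness(act):.3f}")

# EDT favours one-boxing only by the value of the gum, while CDT favours
# two-boxing by the value of a million lives, so the expectation over
# theories favours two-boxing unless credence in EDT is overwhelming.
```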
Other questions:
Bostrom’s parliamentary model is different. Between EDT and CDT, the intertheoretic comparisons of value are easy, so there’s no need to use the parliamentary analogy—one can just straightforwardly take an expectation over decision theories.
Pascal’s Mugging (aka the “Fanaticism” worry). This is a general issue for attempts to take normative uncertainty into account in one’s decision-making, and not something I discuss in my paper. But if you’re concerned about Pascal’s Mugging and, say, think that a bounded decision theory is the best way to respond to the problem—then at the meta level you should also have a bounded decision theory (and at the meta-meta level, and so on).
UDT is totally supposed to smoke on the smoking lesion problem. That’s kinda the whole point of TDT, UDT, and all the other theories in the family.
It seems to me that your high-stakes predictor case is adequately explained by residual uncertainty about the scenario setup and whether Omega actually predicts you perfectly, which will yield two-boxing by TDT in this case as well. Literal, absolute epistemic certainty will lead to one-boxing, but this is a degree of certainty so great that we find it difficult to stipulate even in our imaginations.
I ought to steal that “stick of chewing gum vs. a million children” to use on anyone who claims that the word of the Bible is certain, but I don’t think I’ve ever met anyone in person who said that.
Can’t we just assume that whatever we do was predicted correctly? The problem does assume an ‘almost certain’ predictor. Shouldn’t that make two-boxing the worst move?
Basically yes. The choice is a simple one, with two-boxing being the obviously stupid choice.