Nice post! As you can probably imagine, I agree with most of the stuff here.
>VII. Identity crises are no defense of CDT
On points 1 and 2: this is true, but I’m not sure one-boxing / EDT alone solves this problem. I haven’t thought much about selfish agents in general, though.
Random references that might be of interest:
>V. Monopoly money
As far as I can tell, this kind of point was first made on p. 108 here:
Gardner, Martin (1973). “Free will revisited, with a mind-bending prediction paradox by William Newcomb”. In: Scientific American 229.1, pp. 104–109.
Cf. https://casparoesterheld.files.wordpress.com/2018/01/learning-dt.pdf
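For concreteness, here’s a minimal sketch of the point that one-boxers predictably walk away richer (the 99% predictor accuracy and the payoffs are my own illustrative assumptions, not from the post):

```python
import random

ACC = 0.99  # assumed predictor accuracy (illustrative)

def newcomb_payoff(one_boxes: bool) -> int:
    # The predictor fills the opaque box with $1M iff it predicts
    # one-boxing; the transparent box always holds $1K. The prediction
    # matches the agent's actual choice with probability ACC.
    predicted_one_boxing = one_boxes if random.random() < ACC else not one_boxes
    opaque = 1_000_000 if predicted_one_boxing else 0
    return opaque if one_boxes else opaque + 1_000

random.seed(0)
n = 100_000
for choice in (True, False):
    avg = sum(newcomb_payoff(choice) for _ in range(n)) / n
    print("one-boxer" if choice else "two-boxer", f"${avg:,.0f}")
# one-boxers average ~$990,000; two-boxers ~$11,000
```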
>the so-called “Tickle Defense” of EDT.
I have my own introduction to the tickle defense, aimed more at people in this community than at philosophers:
https://users.cs.duke.edu/~ocaspar/TickleDefenseIntro.pdf
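In case a toy model is helpful: the defense turns on the lesion influencing the choice only via an introspectable desire (the “tickle”), so that, conditional on the tickle, the choice carries no further news about the lesion. A sketch with made-up numbers:

```python
from itertools import product

# Toy joint distribution for a smoking-lesion setup (all numbers made
# up for illustration). Crucially, smoking depends on the lesion only
# via the tickle.
P_LESION = 0.2
P_TICKLE = {True: 0.9, False: 0.1}  # P(tickle | lesion)
P_SMOKE = {True: 0.8, False: 0.2}   # P(smoke | tickle), lesion-independent

joint = {}
for lesion, tickle, smoke in product([True, False], repeat=3):
    p = P_LESION if lesion else 1 - P_LESION
    p *= P_TICKLE[lesion] if tickle else 1 - P_TICKLE[lesion]
    p *= P_SMOKE[tickle] if smoke else 1 - P_SMOKE[tickle]
    joint[lesion, tickle, smoke] = p

def p_lesion_given(**obs):
    # P(lesion | obs) by brute-force enumeration of the joint table.
    keep = [(l, p) for (l, t, s), p in joint.items()
            if all({"lesion": l, "tickle": t, "smoke": s}[k] == v
                   for k, v in obs.items())]
    return sum(p for l, p in keep if l) / sum(p for _, p in keep)

# Once the agent has introspected the tickle, choosing to smoke is no
# further evidence of the lesion -- the two probabilities coincide:
print(p_lesion_given(tickle=True))              # ~0.692
print(p_lesion_given(tickle=True, smoke=True))  # ~0.692 (screened off)
```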
>Finally, consider a version of Newcomb’s problem in which both boxes are transparent
There’s a lot of discussion of this in the philosophical literature. From what I can tell, the case was first proposed in Sect. 10 of:
Gibbard, Allan and William L. Harper (1981). “Counterfactuals and Two Kinds of Expected Utility”. In: Ifs: Conditionals, Belief, Decision, Chance and Time. Ed. by William L. Harper, Robert Stalnaker, and Glenn Pearce. Vol. 15. The University of Western Ontario Series in Philosophy of Science. Springer, pp. 153–190. doi: 10.1007/978-94-009-9117-0_8
>There is a certain broad class of decision theories, a number of which are associated with the Machine Intelligence Research Institute (MIRI), that put resolving this type of inconsistency in favor of something like “the policy you would’ve wanted to adopt” at center stage.
Another early academic discussion of updatelessness is:
Gauthier, David (1989). “In the Neighbourhood of the Newcomb-Predictor (Reflections on Rationality)”. In: Proceedings of the Aristotelian Society 89, pp. 179–194.
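To illustrate the updateless idea on the transparent-boxes case above (again with my own assumed numbers, a 99%-accurate predictor): evaluated ex ante, the policy of one-boxing even upon seeing a full box comes out far ahead.

```python
# Ex-ante policy comparison for the transparent-boxes case
# (illustrative assumptions: 99% predictor accuracy, standard payoffs).
ACC = 0.99
M, K = 1_000_000, 1_000

def ex_ante_value(one_box_on_full: bool) -> float:
    # The predictor fills the big box iff it predicts the agent would
    # one-box upon seeing it full; the prediction is right w.p. ACC.
    p_full = ACC if one_box_on_full else 1 - ACC
    if one_box_on_full:
        # Full: take only the big box ($1M). Empty: take the small box ($1K).
        return p_full * M + (1 - p_full) * K
    # Full: take both ($1M + $1K). Empty: take the small box ($1K).
    return p_full * (M + K) + (1 - p_full) * K

print(f"{ex_ante_value(True):,.0f}")   # 990,010
print(f"{ex_ante_value(False):,.0f}")  # 11,000
```

Of course, once the agent actually sees the full box, taking both is better from that point on; this is exactly the inconsistency that these theories resolve in favor of the ex-ante policy.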