I object to the framing of the bomb scenario on the grounds that low probabilities of high stakes are a source of cognitive bias that trips people up for reasons having nothing to do with FDT. Consider the following decision problem: “There is a button. If you press the button, you will be given $100. Also, pressing the button has a very small (one in a trillion trillion) chance of causing you to burn to death.” Most people would not touch that button, even though pressing is clearly worth it in expectation. Using the same payoffs and probabilities in a scenario meant to challenge FDT thus exploits cognitive bias to make FDT look bad. A better scenario would replace the bomb with something that fines you $1000 (and, if you want, also increase the chance of error).
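To make the arithmetic explicit, here is a minimal sketch; the dollar value assigned to one's life is a made-up stand-in, and the point is only that the expected cost of pressing is negligible unless that value is astronomically large:

```python
# Illustrative expected-value arithmetic for the button problem.
# value_of_life is a hypothetical stand-in figure, not a claim about
# how lives should be valued.
p_death = 1e-24        # one in a trillion trillion
payoff = 100.0         # dollars gained for pressing
value_of_life = 1e7    # hypothetical dollar-equivalent of burning to death

ev_press = payoff - p_death * value_of_life
ev_refuse = 0.0

print(ev_press)   # ~100.0 -- the death term contributes only about 1e-17 dollars
print(ev_refuse)  # 0.0
```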
But then, it seems to me, FDT has lost much of its initial motivation: the case for one-boxing in Newcomb’s problem didn’t seem to stem from whether the Predictor was running a simulation of me, or just using some other way to predict what I’d do.
I think the crucial difference here is how easily you can cause the predictor to be wrong. In the case where the predictor simulates you, if you two-box, then the predictor expects you to two-box. In the case where the predictor uses your nationality to predict your behavior (say, Scots usually one-box, and you’re Scottish), then even if you two-box, the predictor will still expect you to one-box, because you’re Scottish.
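A toy sketch of the contrast, not anything from the FDT paper; the two predictor functions and the payout numbers are my own invention, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one):

```python
# Toy contrast between a simulating predictor and a merely correlational one.

def simulating_predictor(your_policy):
    # Runs *your* decision procedure, so its prediction co-varies with your choice.
    return your_policy()

def nationality_predictor(nationality):
    # Ignores your actual decision procedure entirely.
    return "one-box" if nationality == "Scottish" else "two-box"

def payout(prediction, action):
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque + (1_000 if action == "two-box" else 0)

# Against the simulator, switching to two-boxing also changes the prediction:
print(payout(simulating_predictor(lambda: "two-box"), "two-box"))  # 1_000
print(payout(simulating_predictor(lambda: "one-box"), "one-box"))  # 1_000_000

# Against the nationality predictor, a Scot who two-boxes is still predicted
# to one-box, so two-boxing just adds the transparent box on top:
print(payout(nationality_predictor("Scottish"), "two-box"))        # 1_001_000
```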
But now suppose that the pathway by which S causes there to be money in the opaque box or not is that another agent looks at S...
I didn’t think that was supposed to matter at all? I haven’t actually read the FDT paper, and have mostly just been operating under the assumption that FDT is basically the same as UDT, but UDT didn’t build in any dependency on external agents, and I hadn’t heard about any such dependency being introduced in FDT; it would surprise me if it did.