FDT is a valuable idea in that it’s a stepping stone towards / approximation of UDT. Given this, it’s probably a good thing for Eliezer to have written about. Kind of like how Merkle’s Puzzles was an important stepping stone towards RSA, even though there’s no use for it now. You can’t always get a perfect solution the first time when working at the frontier of research. What’s the alternative? You discover something interesting but not quite right, so you don’t publish because you’re worried someone will use your discovery as an example of you being wrong?
Also:
There’s a 1 in a googol chance that he’ll blackmail someone who would give in to the blackmail and a (googol − 1)/googol chance that he’ll blackmail someone who won’t give in to the blackmail.
Is this a typo? We’d rather not be blackmailed at all, so we should be the kind of agent who gives in and pays: according to those odds, agents who give in are almost never blackmailed in the first place. Therefore FDT would agree that the best policy in such a situation is to give in.
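To spell out the arithmetic, here’s a minimal sketch in Python. The payoff numbers are made up, since the post doesn’t specify them; the only inputs taken from the quote are the two probabilities.

```python
# Rough expected-value check of the two policies under the quoted odds.
# The payoff numbers below are illustrative assumptions, not from the post.

GOOGOL = 10**100

p_blackmail_if_give_in = 1 / GOOGOL            # "1 in a googol" for agents who would pay
p_blackmail_if_refuse = (GOOGOL - 1) / GOOGOL  # near-certainty for agents who won't pay

cost_of_paying = -1                        # assumed: paying the blackmailer is mildly bad
cost_of_refusing_when_blackmailed = -1000  # assumed: having the threat carried out is much worse

ev_give_in = p_blackmail_if_give_in * cost_of_paying
ev_refuse = p_blackmail_if_refuse * cost_of_refusing_when_blackmailed

print(f"EV(policy: give in) = {ev_give_in:.3g}")  # about -1e-100
print(f"EV(policy: refuse)  = {ev_refuse:.3g}")   # about -1000
```

Under those (assumed) numbers, the give-in policy wins by roughly a hundred orders of magnitude, which is why the quoted odds read like a typo.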
I was kind of hoping you had more mathematical/empirical stuff. As-is, this post seems to mostly be “Eliezer Yudkowsky Is Frequently, Confidently, and Egregiously In Disagreement With My Own Personal Philosophical Opinions”.
(I have myself observed an actual mathematical/empirical Eliezer-error before: he argued that since astronomical observations had shown the universe to be either flat or negatively curved, it must be infinite. The error is that there are flat and negatively curved spaces that are finite because they “loop around”, much like the maze in Pac-Man. (Another issue is that a flat universe is infinitesimally close to a positively curved one, so a set of measurements that ruled out a positively curved universe would also rule out a flat one. Except that maybe your prior has a delta spike at zero curvature, on simplicity grounds. Then you measure the curvature, and it comes out so close to zero, with such tight error bars, that most of your posterior probability now lives in the delta spike at zero. That’s a thing that could happen.))
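To make the “delta spike” update concrete, here’s a toy Bayesian calculation. The prior weight, prior width, and measurement error are all made-up numbers, not real cosmological data.

```python
import math

# Toy Bayesian update with a mixture prior on spatial curvature:
# a point mass ("delta spike") at exactly zero plus a flat continuous prior.

P_SPIKE = 0.5           # assumed prior weight on "exactly flat"
PRIOR_HALF_WIDTH = 0.1  # continuous part: uniform on [-0.1, 0.1] in curvature units

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_spike_probability(measured, sigma):
    """P(curvature is exactly 0 | a measurement with Gaussian error sigma)."""
    # Likelihood of the data under the spike hypothesis (curvature = 0).
    like_spike = gaussian_pdf(measured, 0.0, sigma)
    # Marginal likelihood under the continuous part: when sigma is much smaller
    # than the prior width, the integral is roughly the prior density there.
    like_continuous = 1.0 / (2 * PRIOR_HALF_WIDTH)
    numerator = P_SPIKE * like_spike
    return numerator / (numerator + (1 - P_SPIKE) * like_continuous)

# A measurement very close to zero with tight error bars pushes almost all
# of the posterior mass onto the spike.
print(posterior_spike_probability(measured=0.0001, sigma=0.001))  # ~0.99
```

The point is just that a sharp enough measurement near zero can concentrate almost all the posterior on the “exactly flat” hypothesis, even though no measurement can distinguish exactly flat from very slightly curved.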
EDIT: I’ve used “FDT” kind of interchangeably with “TDT” here, because in my way of viewing things, they’re very similar decision theories. But it’s important to note that historically, TDT was proposed first, then UDT, and FDT was published much later, as a kind of cleaned-up version of TDT. From my perspective, this is a little confusing, since UDT seems superior to both FDT and TDT, but I guess it’s of non-zero value to go back and clean up your old ideas, even if they’ve been made obsolete. Thanks to Wei Dai for pointing out this issue.
FDT is a valuable idea in that it’s a stepping stone towards / approximation of UDT.
You might be thinking of TDT, which was invented prior to UDT. FDT actually came out after UDT. My understanding is that the OP disagrees with the entire TDT/UDT/FDT line of thinking, since they all one-box in Newcomb’s problem and the OP thinks one should two-box.