It seems a real strain to describe either of them as unfair to TDT.
From this side of the screen, that looks like a property of you, not of the problems. If we fix the "relative numbers" statement in advance (we had to make assumptions about it anyway, so let's save time and build the assumptions in), then problem 2 reads: "I simulated the best decision theory by definition X and put the money where it doesn't choose." This shows that no matter how good a decision theory is by any definition, it can still get hosed by Omega. Here we're assuming that TDT maximizes definition X (so the specification is unique), and yea, TDT did go forth and get hosed by Omega.
So there’s a class of problems where failure is actually a good sign? Interesting. You might want to post further on that, actually.
Hm, yeah. After some computational work at least. Every decision procedure can get hosed by Omega, and the way in which it gets hosed is diagnostic of its properties. Though not uniquely, I guess, so you can’t say “it fails this special test therefore it is good.”