The problem being addressed here is that (to me, at least) basic TDT doesn’t seem to have a natural way even to represent this possibility.
Is Silas’s claim that TDT can represent this possibility the same (natural) way it represents every other possibility?
Yes, and I believe I showed as much in my comment where I referenced EY’s previous post: you can compute, imperfectly, the way that your innards will affect your attempt to implement the algorithms you’ve selected (and this can mean self-interest, akrasia, corrupted hardware, etc.).
Good good. Just making sure I understand at least one of the positions correctly.
Well, you understood it at the time I made it. After reading Eliezer_Yudkowsky’s deeper exposition of TDT, with a better understanding of Pearl causality, here’s what I think:
Psy_Kosh is right that in any practical situation, you have to be aware of post-decision interference by your innards. However, I think EY’s TDT causal network for the problem, as shown in AnnaSalamon’s post, is a fair representation of Newcomb’s problem as given. There, you can assume that there’s no interference between you and your box choice because the problem definition allows it.
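For concreteness, here’s a minimal sketch of how that network evaluates Newcomb’s problem when there’s no interference between you and your box choice. The 99% predictor accuracy and the standard $1,000,000 / $1,000 payoffs are my assumptions, not figures from the discussion; the point is only that the prediction and the physical choice are both downstream of the same logical output, so “surgery” on that output moves both:

```python
# Minimal sketch of Newcomb's problem under a TDT-style causal network.
# Assumed numbers (not from the original discussion): predictor accuracy
# 0.99; $1,000,000 in box B iff one-boxing was predicted; $1,000 in box A.
ACCURACY = 0.99
MILLION, THOUSAND = 1_000_000, 1_000

def expected_payoff(action: str) -> float:
    """TDT 'surgery' on the algorithm's output node: the prediction and
    the physical choice are both downstream of that one node, so fixing
    the action also (probabilistically) fixes what was predicted."""
    if action == "one-box":
        # Predictor foresaw one-boxing with prob ACCURACY -> box B is full.
        return ACCURACY * MILLION + (1 - ACCURACY) * 0
    else:  # two-box
        # Predictor foresaw two-boxing with prob ACCURACY -> box B is empty.
        return ACCURACY * THOUSAND + (1 - ACCURACY) * (MILLION + THOUSAND)

print(max(["one-box", "two-box"], key=expected_payoff))  # -> one-box
```

Under these numbers, one-boxing yields an expected $990,000 against $11,000 for two-boxing, which is exactly the verdict the TDT network delivers on the problem as given.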
And with that interpretation, the TDT algorithm is quite brilliant.
Sorry for the extreme delay in getting around to replying. Anyway, yeah, I agree that TDT is nice and solves various things. I don’t want to completely toss it out. My point was simply “I think it’s very, very important that we modify the original form of it to be able to deal with this issue. Here’s what I think would be one way of doing so that fits with the same sort of principle that TDT is based on.”
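One way the issue can at least be *represented* is by extending the same network. Purely as an illustration (the 90% reliability figure, and the modeling choice of having the predictor predict the final act rather than the intention, are my assumptions, not necessarily the modification being proposed): insert an “innards” node between the algorithm’s output and the physical act, so the act only follows the chosen output some of the time:

```python
# Illustrative extension of the Newcomb network with an "innards" node:
# the physical act follows the algorithm's chosen output only with
# probability RELIABILITY. All numbers are assumed for the sketch.
RELIABILITY = 0.9   # assumed chance your hardware executes your choice
ACCURACY = 0.99     # assumed predictor accuracy (predicts the final act)
MILLION, THOUSAND = 1_000_000, 1_000

def payoff(predicted_one_box: bool, took_one_box: bool) -> int:
    """Box B is full iff one-boxing was predicted; two-boxing adds box A."""
    box_b = MILLION if predicted_one_box else 0
    return box_b if took_one_box else box_b + THOUSAND

def expected_payoff(intended: str) -> float:
    """Expected value of intending an action, marginalizing over whether
    the innards actually carry it out."""
    total = 0.0
    for act_matches, p_act in ((True, RELIABILITY), (False, 1 - RELIABILITY)):
        took_one = (intended == "one-box") == act_matches
        # The predictor is modeled as predicting the act, innards included.
        for pred_one, p_pred in ((took_one, ACCURACY), (not took_one, 1 - ACCURACY)):
            total += p_act * p_pred * payoff(pred_one, took_one)
    return total

print(max(["one-box", "two-box"], key=expected_payoff))  # -> one-box
```

With these particular numbers the intention to one-box still wins (expected $892,100 vs. $108,900), but the point of the extra node is that the answer now depends on how reliable your innards are, which the bare network in the original problem statement assumes away.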