I have no doubt that TDT is an improvement on CDT, but for that comparison even to make sense, we need some picture of what sort of problem we want our decision theory to solve. Presumably the answer is "the sorts of problems you're actually likely to face in the real world".
If that’s so, why do we spend so much time talking about Newcomb problems? Should we ban Omega from our decision theories?
Omega is relevant because AGIs might show each other their source code, at which point each gains Omega-like predictive power vis-à-vis the other.
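A toy sketch of that point (hypothetical function names, and a far cruder "predictor" than anything an AGI would actually run): once the predictor can simply execute your decision procedure, it predicts your choice as reliably as Omega does in Newcomb's problem.

```python
# Toy Newcomb setup where the predictor works by reading and running
# the agent's source code. Names and payoffs are illustrative only.

def one_boxer():
    return "one-box"        # this agent's entire visible decision procedure

def two_boxer():
    return "two-box"

def newcomb_payout(agent_code):
    """Omega-by-simulation: fill the opaque box iff running the code yields 'one-box'."""
    prediction = agent_code()                      # prediction = exact simulation
    opaque = 1_000_000 if prediction == "one-box" else 0
    choice = agent_code()                          # the agent's actual choice
    return opaque if choice == "one-box" else opaque + 1_000

print(newcomb_payout(one_boxer))   # 1000000: one-boxing wins against a code-reading predictor
print(newcomb_payout(two_boxer))   # 1000
```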
On the other hand, an AGI running CDT would self-modify to UDT/TDT if running UDT/TDT led to better outcomes, so maybe we can leave the decision-theoretic work to our AGI.
The issue there is that a ‘proof’ of friendliness might rely on a lot of decision theory.
If you want to build a smart machine, decision theory seems sooo not the problem.
Deep Blue just maximised its expected success. That worked just fine for beating humans.
We have decision theories. The main problem is implementing approximations to them with limited spacetime.
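A minimal sketch of that claim (toy state space and made-up numbers, nothing to do with Deep Blue's actual search): the theory itself, "maximise expected utility", is easy to state; the engineering problem is that exact maximisation is intractable, so in practice we truncate the search at a fixed depth and fall back on a cheap heuristic estimate.

```python
# Depth-limited expectimax: an approximation to expected-utility
# maximisation forced on us by limited space and time.

def heuristic_value(state):
    # Cheap stand-in for the true expected utility of a position.
    return -abs(state - 10)

def successors(state):
    return [state + 1, state - 1, state + 2]

def expectimax(state, depth):
    """Truncate at `depth` and substitute the heuristic for the exact value."""
    if depth == 0:
        return heuristic_value(state)
    # Assume each move succeeds with probability 0.9, else the state is unchanged.
    return max(0.9 * expectimax(s, depth - 1) + 0.1 * expectimax(state, depth - 1)
               for s in successors(state))

best = max(successors(3), key=lambda s: expectimax(s, depth=3))
print(best)  # the move chosen under the bounded approximation
```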
IMO, this is probably all to do with craziness about provability, originating from paranoia.
Obsessions with the irrelevant are potentially damaging, because caution carries risks of its own.