This setup plays a computational trick, and as a result I don’t think it violates the optimality standard I proposed. In order to decide what it should do, the CDT agent needs to think strictly longer than the UDT agent. But if the CDT agent thinks longer than the UDT agent, it’s totally unsurprising that it does better! (Basically, the problem just consists of a computational question which is chosen to be slightly too complex for the UDT agent. But the CDT agent is allowed to think as long as it likes. This entire family of problems appears to be predicated on the lack of computational limits for our agents.)
As a result, if the UDT agent is told what the CDT agent decides, then it can get the same performance as the CDT agent. This seems to illustrate that the CDT agent isn’t doing better by being wiser, just by knowing something the UDT agent doesn’t. (I wasn’t actually thinking about this case when I introduced the weakened criterion; the weakening is obviously necessary for UDT with 10 years of time to compete with CDT with 11 years of time, and I included it for that reason.)
Does this seem right? If so, is there a way to set up the problem that violates my weakened standard?
Incidentally, this problem involves a discontinuous dependence on UDT’s decision (both by the competitor and by the environment). I wonder if this discontinuous dependence is necessary?
Never mind — we get the same problem in the Newcomb’s problem case when the simulated agent is running UDT, i.e. where P(big box contains $1M) = P(the UDT agent 1-boxes after being told that the CDT agent 2-boxes).
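For concreteness, here is a minimal toy model of that Newcomb variant. It assumes the standard payoffs ($1M in the big box, $1k in the small box) and, purely for illustration, fixes the simulated UDT agent’s policy to 1-boxing; the function names, payoff values, and that fixed policy are my own assumptions rather than anything specified in the setup above.

```python
# Toy sketch of the Newcomb variant where the predictor simulates a UDT agent
# that has been told what the CDT agent decided. All concrete values below
# (payoffs, the simulated agent's policy) are illustrative assumptions.

BIG = 1_000_000   # contents of the big box if the predictor fills it
SMALL = 1_000     # contents of the small box, always available

def simulated_udt(told_cdt_choice: str) -> str:
    """Stand-in for the simulated UDT agent, which chooses after being
    told the CDT agent's decision. Assumed here to 1-box regardless."""
    return "1-box"

def box_contents(told_cdt_choice: str) -> int:
    # The big box contains $1M exactly when the simulated UDT agent
    # (told the CDT agent's choice) 1-boxes; in this deterministic toy
    # model the probability in the text collapses to 0 or 1.
    return BIG if simulated_udt(told_cdt_choice) == "1-box" else 0

def payoff(actual_choice: str, told_cdt_choice: str = "2-box") -> int:
    big = box_contents(told_cdt_choice)
    return big if actual_choice == "1-box" else big + SMALL

if __name__ == "__main__":
    for choice in ("1-box", "2-box"):
        print(choice, payoff(choice))
```

Running this just prints the payoff of each option under the assumed simulation; the point is only to make explicit how the big box’s contents depend on what the simulated UDT agent does after hearing the CDT agent’s decision.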