If you are suggesting that as a counterexample, i.e. that a powerful Bayesian model-based learning agent (an EDT agent) would incorrectly believe that stork populations cause human births (or, more generally, would confuse correlation with causation), then no, I do not agree.
A reasonably powerful world model would correctly predict that changes to stork populations are not, in themselves, very predictive of human births.
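As a concrete illustration of that claim, here is a minimal sketch (simulated data; the "ruralness" confounder and the coefficients are hypothetical, chosen only for the example) of how a model that captures the common cause behind the correlation stops treating stork counts as directly predictive of births:

```python
# Minimal sketch: storks and births are both driven by a hidden common cause.
# A naive fit picks up the spurious correlation; once the common cause is in
# the model, the direct weight on storks collapses to roughly zero.
# NOTE: all variable names and coefficients here are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

ruralness = rng.normal(size=n)                 # hidden common cause
storks = 2.0 * ruralness + rng.normal(size=n)  # stork counts track ruralness
births = 3.0 * ruralness + rng.normal(size=n)  # birth rates track ruralness too

# Naive fit: births ~ storks (reproduces the spurious correlation)
naive_coef, *_ = np.linalg.lstsq(storks[:, None], births, rcond=None)

# Richer model: births ~ storks + ruralness (storks no longer carry weight)
X = np.column_stack([storks, ruralness])
rich_coef, *_ = np.linalg.lstsq(X, births, rcond=None)

print("naive storks coefficient:", naive_coef[0])            # clearly nonzero
print("storks coefficient given ruralness:", rich_coef[0])   # approximately 0
```

The naive regression reproduces the correlation; a model rich enough to include the confounder assigns essentially no direct predictive weight to storks.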
“A reasonably powerful world model” would correctly predict that being an FDT agent is better than being an EDT agent, and would modify itself into an FDT agent, because there are several problems on which both CDT and EDT fail relative to FDT (see the original FDT paper).
False. For example, the Parfit's hitchhiker setup doesn't compare EDT and FDT on exactly bit-equivalent environments and action choices; see my reply here.
For the environment where you are stranded in the desert talking with the driver, the optimal implicit action is to agree to pay them and to precommit to that (something humans do without much trouble all the time). EDT can obviously make that optimal decision when given the same decision options that FDT has.
For the environment where you are in the city having already received the ride, and you did not precommit (agree in advance to pay), EDT also takes the optimal action of not paying.
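To make that concrete, here is a minimal sketch (the payoff numbers are hypothetical) of the two decision points evaluated by plain conditional expected value, the way an EDT-style agent would: with the precommitment option available in the desert it precommits to paying, and back in the city with no prior precommitment it does not pay.

```python
# Minimal sketch of the two Parfit's-hitchhiker decision points described
# above, evaluated by conditional expected value (EDT-style).
# NOTE: the payoff numbers below are hypothetical, chosen only for illustration.

RIDE_VALUE = 1_000_000  # value of being rescued from the desert (hypothetical)
PAYMENT = 1_000         # price the driver asks for (hypothetical)

def desert_decision():
    """Decision point 1: still in the desert, where the driver can tell
    whether your commitment to pay is sincere."""
    ev = {
        "precommit_to_pay": RIDE_VALUE - PAYMENT,  # driver takes you; you pay later
        "refuse": 0,                               # driver leaves you stranded
    }
    return max(ev, key=ev.get), ev

def city_decision():
    """Decision point 2: already in the city with no prior precommitment;
    paying no longer changes whether you got the ride."""
    ev = {
        "pay": RIDE_VALUE - PAYMENT,
        "dont_pay": RIDE_VALUE,
    }
    return max(ev, key=ev.get), ev

if __name__ == "__main__":
    print(desert_decision())  # ('precommit_to_pay', ...)
    print(city_decision())    # ('dont_pay', ...)
```

Given the same decision options, both decision points come out the way described above under straightforward expected-value maximization.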
FDT’s supposed superiority is a misdirection based on allowing it extra pre-actions before the main action.