All it’s really saying is that a normatively rational agent should consider the questions “What should I do in this situation?” and “What would I want to pre-commit to do in this situation?” equivalent.
I don’t consider them equivalent.
Fair enough. I’m not exactly qualified to talk about this sort of thing, but I’d still be interested to hear why you think the answers to these two ought to be different. (There’s no guarantee I’ll reply, though!)
Because reality operates in continuous time. In the time interval between now and the moment when I have to make a choice, new information might come in, things might change. Precommitment is loss of flexibility and while there are situations when you get benefits compensating for that loss, in the general case there is no reason to pre-commit.
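The value-of-flexibility point above can be made concrete with a toy expected-value calculation of my own construction (the weather scenario, probabilities, and payoffs are all made-up assumptions, not anything from this thread): if information arrives before the choice must be made, deciding later weakly dominates precommitting.

```python
# Toy illustration: waiting for information vs. precommitting.
# All names and numbers here are illustrative assumptions.

p = {"rain": 0.5, "sun": 0.5}
payoff = {
    ("umbrella", "rain"): 10, ("umbrella", "sun"): 3,
    ("no_umbrella", "rain"): 0, ("no_umbrella", "sun"): 8,
}
actions = ["umbrella", "no_umbrella"]

# Precommit: lock in one action before the weather is known.
ev_precommit = max(
    sum(p[w] * payoff[(a, w)] for w in p) for a in actions
)

# Stay flexible: pick the best action after observing the weather.
ev_flexible = sum(
    p[w] * max(payoff[(a, w)] for a in actions) for w in p
)

print(ev_precommit, ev_flexible)  # 6.5 9.0
```

The gap between the two numbers is exactly the value of the flexibility that a blanket precommitment gives up, which is the general-case argument being made here.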
Curiously, this particular claim is true only because Lumifer’s primary claim is false. An ideal CDT agent released at time T with the capability to self-modify (or otherwise precommit) will, as rapidly as possible (at T + e), make a general precommitment covering the entire class of things that can be regretted in advance, but only for the purpose of influencing decisions made after T + e (while continuing with two-boxing-type thinking for boxes filled before T + e).
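The time-indexed behavior described above can be sketched with a toy Newcomb payoff model (a minimal sketch under my own assumptions: a perfect predictor, the standard $1,000,000 / $1,000 box amounts, and hypothetical function names — none of this is from the thread itself): the self-modified CDT agent one-boxes exactly when the prediction postdates its precommitment.

```python
# Toy Newcomb model: a CDT agent that self-modifies at t_commit
# one-boxes for predictions made after t_commit, but keeps two-boxing
# for boxes already filled before it. Numbers and names are assumptions.

FULL_BOX = 1_000_000   # opaque box, filled iff predictor expects one-boxing
SMALL_BOX = 1_000      # transparent box, always present

def chooses_one_box(prediction_time: float, t_commit: float) -> bool:
    """The precommitment only binds decisions whose predictions
    postdate the commitment made at t_commit."""
    return prediction_time > t_commit

def payoff(prediction_time: float, t_commit: float) -> int:
    one_boxes = chooses_one_box(prediction_time, t_commit)
    # A perfect predictor fills the opaque box iff the agent will one-box.
    opaque = FULL_BOX if one_boxes else 0
    return opaque if one_boxes else opaque + SMALL_BOX

# Box filled before the commitment: agent two-boxes, predictor foresaw it.
print(payoff(prediction_time=0.5, t_commit=1.0))  # 1000
# Box filled after the commitment: agent one-boxes, gets the million.
print(payoff(prediction_time=2.0, t_commit=1.0))  # 1000000
```

This is why the agent wants e to be as small as possible: every prediction made before T + e falls on the losing side of the cutoff.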
Curiously enough, I made no claims about ideal CDT agents.
True. CDT is merely a steel-man of your position that you actively endorsed in order to claim prestigious affiliation.
The comparison is actually rather more generous than what I would have made myself. CDT has no arbitrary discontinuity between p=1 and p=(1-e), for example.
That said, the grandparent’s point applies just as well regardless of whether we consider CDT, EDT, the corrupted Lumifer variant of CDT, or most other naive but not fundamentally insane decision algorithms. In the general case there is a damn good reason to make an abstract precommitment as soon as possible. UDT is an exception only because such precommitment would be redundant.