That seems to me to expand Newcomb’s Problem greatly, in particular into the area where you know you’ll meet Omega and can prepare by modifying your internal state. I don’t want to argue definitions, but my understanding of Newcomb’s Problem is much narrower. To quote Wikipedia,
By the time the game begins, and the player is called upon to choose which boxes to take, the prediction has already been made, and the contents of box B have already been determined.
and that’s clearly not the situation of Joe and Kate.
Perhaps, but it is my understanding that an agent who is programmed to avoid reflective inconsistency would find the two situations equivalent. Is there something I’m missing here?
I don’t know what “an agent who is programmed to avoid reflective inconsistency” would do. I am not one and I think no human is.
Reflective inconsistency isn’t that hard to grasp, though, even for a human. All it’s really saying is that a normatively rational agent should consider the questions “What should I do in this situation?” and “What would I want to pre-commit to do in this situation?” equivalent. If that’s the case, then there is no qualitative difference between Newcomb’s Problem and the situation regarding Joe and Kate, at least to a perfectly rational agent. I do agree with you that humans are not perfectly rational. However, don’t you agree that we should still try to be as rational as possible, given our hardware? If so, we should strive to fit our own behavior to the normative standard—and unless I’m misunderstanding something, that means avoiding reflective inconsistency.
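As a toy illustration of how those two questions can come apart, here is a short Python sketch, assuming the standard $1,000 / $1,000,000 payoffs and a 99%-accurate predictor (numbers chosen only for illustration): a purely causal reasoner two-boxes when asked “what should I do now?”, yet would want to pre-commit to one-boxing.

```python
# Toy Newcomb setup (assumed numbers): box A always holds $1,000; box B holds
# $1,000,000 iff the predictor foresaw one-boxing. Predictor accuracy: 99%.
ACCURACY = 0.99
A, B = 1_000, 1_000_000

def ev_decide_now(action, box_b_full):
    """'What should I do?' asked after box B is already filled:
    a causal reasoner treats the contents as fixed."""
    return (A if action == "two-box" else 0) + (B if box_b_full else 0)

def ev_precommit(action):
    """'What would I want to pre-commit to?' asked before the prediction:
    the commitment itself determines (with probability ACCURACY) box B."""
    p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
    return (A if action == "two-box" else 0) + p_full * B

# Decide-now: two-boxing dominates whatever box B happens to hold...
assert ev_decide_now("two-box", True) > ev_decide_now("one-box", True)
assert ev_decide_now("two-box", False) > ev_decide_now("one-box", False)
# ...yet the same agent would want to pre-commit to one-boxing:
assert ev_precommit("one-box") > ev_precommit("two-box")
# Different answers to the two questions = reflective inconsistency.
```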
All it’s really saying is that a normatively rational agent should consider the questions “What should I do in this situation?” and “What would I want to pre-commit to do in this situation?” equivalent.
I don’t consider them equivalent.
Fair enough. I’m not exactly qualified to talk about this sort of thing, but I’d still be interested to hear why you think the answers to these two ought to be different. (There’s no guarantee I’ll reply, though!)
Because reality operates in continuous time. In the time interval between now and the moment when I have to make a choice, new information might come in and things might change. Precommitment is a loss of flexibility, and while there are situations where you get benefits compensating for that loss, in the general case there is no reason to pre-commit.
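A minimal sketch of that flexibility argument, with made-up states, probabilities, and payoffs: an agent that waits for the information and then chooses gets the expected value of the best response, E[max_a u(a, s)], which is never less than the max_a E[u(a, s)] that a blanket precommitment locks in.

```python
# Invented two-state example: the state ("rain"/"sun") is revealed some time
# between now and the moment the choice has to be made.
STATES = {"rain": 0.5, "sun": 0.5}                  # P(state), assumed
ACTIONS = ("umbrella", "no_umbrella")
PAYOFF = {("umbrella", "rain"): 10, ("umbrella", "sun"): 3,
          ("no_umbrella", "rain"): 0, ("no_umbrella", "sun"): 8}

# Pre-commit now: pick one action before the state is known -> max_a E[u(a, s)]
ev_precommit = max(sum(p * PAYOFF[a, s] for s, p in STATES.items())
                   for a in ACTIONS)

# Stay flexible: observe the state, then pick the best action -> E[max_a u(a, s)]
ev_flexible = sum(p * max(PAYOFF[a, s] for a in ACTIONS)
                  for s, p in STATES.items())

print(ev_precommit, ev_flexible)   # 6.5 vs 9.0: flexibility wins by 2.5 here
```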
Precommitment is a loss of flexibility, and while there are situations where you get benefits compensating for that loss, in the general case there is no reason to pre-commit.
Curiously, this particular claim is true only because Lumifer’s primary claim is false. An ideal CDT agent released at time T with the capability to self-modify (or otherwise precommit) will, as rapidly as possible (at T + e), make a general precommitment to the entire class of things that can be regretted in advance, but only for the purpose of influencing decisions made after (T + e) (while continuing with two-boxing-type thinking for boxes filled before T + e).
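A sketch of that time-indexed claim, reusing the assumed payoffs and 99% accuracy from above plus an assumed 50/50 prior over boxes filled before the commitment existed: a commitment made at T + e causally influences every prediction made after T + e, so even a causal reasoner adopts it as early as possible, while still two-boxing on boxes filled earlier.

```python
# Assumed numbers as before: $1,000 / $1,000,000 payoffs, 99% predictor
# accuracy, and a 50/50 prior over box B for predictions the commitment
# cannot causally reach.
ACCURACY = 0.99
A, B = 1_000, 1_000_000

def ev_of_commitment(policy, prediction_time, commit_time):
    """Expected payoff of adopting `policy` at `commit_time` for a game whose
    prediction is made at `prediction_time` (all from a causal standpoint)."""
    if prediction_time >= commit_time:
        # The commitment exists before the prediction, so it drives it.
        p_full = ACCURACY if policy == "one-box" else 1 - ACCURACY
    else:
        # Box B was filled before the commitment; the commitment changes nothing.
        p_full = 0.5
    return (A if policy == "two-box" else 0) + p_full * B

# For every game whose prediction lies after the commitment, one-boxing wins:
assert ev_of_commitment("one-box", prediction_time=5, commit_time=1) > \
       ev_of_commitment("two-box", prediction_time=5, commit_time=1)
# For a box filled before the commitment, two-boxing still looks better causally:
assert ev_of_commitment("two-box", prediction_time=0, commit_time=1) > \
       ev_of_commitment("one-box", prediction_time=0, commit_time=1)
```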
Curiously enough, I made no claims about ideal CDT agents.
True. CDT is merely a steel-man of your position that you actively endorsed in order to claim prestigious affiliation.
The comparison is actually rather more generous than what I would have made myself. CDT has no arbitrary discontinuity between p=1 and p=(1-e), for example.
That said, the grandparent’s point applies just as well regardless of whether we consider CDT, EDT, the corrupted Lumifer variant of CDT, or most other naive but not fundamentally insane decision algorithms. In the general case there is a damn good reason to make an abstract precommitment as soon as possible. UDT is an exception only because such a precommitment would be redundant.
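On the p=1 versus p=(1-e) point, with the same assumed payoffs: if the prediction is treated as correlated with the choice with accuracy p, the expected-value comparison between the two strategies varies continuously in p and flips near p ≈ 0.5005, so nothing special happens just below p = 1.

```python
# Assumed payoffs again: $1,000 in box A, $1,000,000 in box B if predicted
# to one-box. Treat the prediction as correlated with the choice with
# accuracy p and compare expected values as p varies.
A, B = 1_000, 1_000_000

def ev_one_box(p): return p * B
def ev_two_box(p): return A + (1 - p) * B

for p in (0.4, 0.5, 0.501, 0.6, 0.99, 0.999999, 1.0):
    better = "one-box" if ev_one_box(p) > ev_two_box(p) else "two-box"
    print(f"p = {p:<9} better: {better}")

# The crossover sits at p = 0.5 + A / (2 * B), about 0.5005; the comparison is
# continuous in p, so a policy that switches only at exactly p = 1 draws an
# arbitrary line, while CDT (two-boxing at every p, including p = 1) at least
# has no such discontinuity.
```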