My thinking goes like this: The difference is that you can make a difference. In the advance or iterated case, you can causally influence your future behaviour, and so the prediction, too. In the original case, you cannot (where backwards causation is forbidden on pain of triviality).
Of course, that’s the oldest reply. But it still has to be countered, and I don’t see how.
Why can’t you influence your future behavior in the original case? When you’re trying to optimize your decision algorithm (‘be rational’), you can consider Newcomblike cases even if Omega hasn’t actually talked to you yet. And so before you’re actually given the choice, you decide that if you are ever in this sort of situation, you will one-box.
I’m sympathetic to some two-boxing arguments, but once you grant that one-boxing is the rational choice when you know about the game in advance, you’ve given up the game (since you do, in fact, know about the game in advance).
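To make the ‘decide in advance’ point concrete, here is a minimal sketch (my own illustration, not anything from the thread) comparing the expected payoff of committing to a one-boxing policy with that of committing to a two-boxing policy, assuming only that Omega predicts your committed policy with some fixed accuracy:

```python
# Toy expected-value comparison of committing in advance to a policy.
# Assumptions (mine, for illustration only): Omega predicts your committed
# policy correctly with probability `accuracy`, and the payoffs are the
# standard $1,000 / $1,000,000.

BOX_A = 1_000          # transparent box: always contains $1,000
BOX_B = 1_000_000      # opaque box: filled only if Omega predicts one-boxing

def expected_payoff(policy: str, accuracy: float = 0.99) -> float:
    """Expected winnings if you commit to `policy` before Omega predicts."""
    if policy == "one-box":
        # Box B is full exactly when Omega correctly predicts one-boxing.
        return accuracy * BOX_B
    if policy == "two-box":
        # You always take box A; box B is full only if Omega mispredicts.
        return BOX_A + (1 - accuracy) * BOX_B
    raise ValueError(policy)

print(expected_payoff("one-box"))   # 990000.0
print(expected_payoff("two-box"))   # 11000.0
```

On these assumptions, committing to one-box comes out ahead whenever the predictor is accurate more than about 50.05% of the time, which is the sense in which granting the advance-knowledge case concedes the original one.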
Alas, this comment really muddies the waters. It leads to Furcas writing something like this:
Of course, it would be even better to pre-commit to one-boxing, since this will indeed affect the kind of brain we have.
Underling asks: if the content of the boxes has already been decided, how can you retroactively affect the content of the boxes?
The problem with what you’ve written, thomblake, is that you seem to agree with Underling that he can’t retroactively change the content of the boxes, and thus to suggest that the content of the boxes has already been determined by past events, such as whether he has been exposed to these problems before and has pre-committed. (This is only vacuously true to the extent that everything is determined by past events.)
Suppose that Underling has never thought about the Newcomb problem before. The contents of the boxes still depend upon what he decides, and his decision is a ‘choice’ just as much as any choice a person ever makes: he can decide which box to pick. And his decision algorithm, which he chooses, will determine the contents of the boxes.
Explaining why this isn’t a problem with causality requires pointing to the determinism of the system. While Underling has a choice of decision algorithms, his choice has already been determined and affects the contents of the box.
If the universe is not deterministic, this problem violates causality.
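On the deterministic reading, a toy simulation makes the point vivid (this is only a sketch under my own simplifying assumptions, with hypothetical function names): Omega fills the boxes by running the agent’s decision algorithm ahead of time, and the agent later runs that very same algorithm, so a single choice of algorithm fixes both the prediction and the eventual choice:

```python
# Toy deterministic model of the original (single-shot) case: Omega's
# "prediction" is just the result of running the agent's own decision
# algorithm ahead of time, and the agent later runs that same algorithm.
# (My illustration; the names here are hypothetical, not from the thread.)

BOX_A = 1_000
BOX_B = 1_000_000

def one_boxer() -> str:
    return "one-box"

def two_boxer() -> str:
    return "two-box"

def play(decision_algorithm) -> int:
    # Omega fills the boxes by running the deterministic algorithm in advance.
    prediction = decision_algorithm()
    box_b = BOX_B if prediction == "one-box" else 0

    # The agent's later choice is produced by the very same algorithm,
    # so it necessarily matches the prediction; no backwards causation is needed.
    choice = decision_algorithm()
    return box_b if choice == "one-box" else BOX_A + box_b

print(play(one_boxer))   # 1000000
print(play(two_boxer))   # 1000
```

Nothing here reaches backwards in time; the correlation between prediction and choice comes entirely from the fact that both are outputs of the same deterministic algorithm.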