Agreed, but I think it’s important to stress that it’s not like you see a bomb, Left-box, and then see it disappear or something. It’s just that Left-boxing means the predictor already predicted that, and the bomb was never there to begin with.
Put differently, you can only Left-box in a world where the predictor predicted you would.
What stops you from Left-boxing in a world where the predictor didn’t predict that you would?
To make the question clearer, let’s set aside all this business about the fallibility of the predictor. Sure, yes, the predictor’s perfect, it can predict your actions with 100% accuracy somehow, something about algorithms, simulations, models, whatever… fine. We take all that as given.
So: you see the two boxes, and after thinking about it very carefully, you reach for the Right box (as the predictor always knew that you would).
But suddenly, a stray cosmic ray strikes your brain! No way this was predictable—it was random, the result of some chain of stochastic events in the universe. And though you were totally going to pick Right, you suddenly grab the Left box instead.
Surely, there’s nothing either physically or logically impossible about this, right?
So if the predictor predicted you’d pick Right, and there’s a bomb in Left, and you have every intention of picking Right, but due to the aforesaid cosmic ray you actually take the Left box… what happens?
But the scenario stipulates that the bomb is there. Given this, taking the Left box results in… what? Like, in that scenario, if you take the Left box, what actually happens?
The scenario also stipulates the bomb isn’t there if you Left-box.
What actually happens? Not much. You live. For free.
Yes, that’s correct.
If you execute the first algorithm, the bomb was never there to begin with.
Here it’s useful to distinguish between agentic ‘can’ and physical ‘can.’
Since I assume a deterministic universe for simplicity, there is only one physical ‘can.’ But there are two agentic ‘can’s: no matter the prediction, I can agentically choose either way. The predictor’s prediction is logically posterior to my choice, and his prediction (and the bomb’s presence) are the way they are because of my choice. So I can Left-box even if there is a bomb in the Left box, even though doing so is physically impossible.
(It’s better to use the agentic ‘can’ rather than the physical ‘can’ for decision-making, since that sense of ‘can’ allows us to act as if we determined the output of all computations identical to us, which brings about better results. The agent that uses the physical ‘can’ as their definition will see the bomb more often.)
Unless I’m missing something.
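To make that last claim concrete, here is a minimal sketch in Python. It is not anyone’s official formalization: it assumes the usual payoffs from the original Bomb case (an empty Left box is free, Right costs $100, Left with the bomb is fatal, none of which is stated in the exchange above), and it operationalizes the perfect predictor as running the agent’s own decision procedure on the bomb-in-Left situation, placing the bomb only if that run comes out Right. The procedure names (committed_left_boxer, bomb_dodger) are illustrative stand-ins, not quotes from any particular decision theory.

```python
# Toy model of the Bomb setup with a perfect predictor. Assumptions (not from
# the exchange above): an empty Left box is free, Right costs $100, and Left
# with the bomb is fatal; the predictor simulates the agent's own decision
# procedure on the bomb-present situation and places the bomb only if that
# simulation outputs "Right".

DEATH = float("-inf")  # payoff for taking a box with a live bomb
RIGHT_COST = -100      # Right is empty but costs $100 to take
FREE = 0               # an empty Left costs nothing


def committed_left_boxer(bomb_in_left: bool) -> str:
    """'The first algorithm': take Left no matter what is observed."""
    return "Left"


def bomb_dodger(bomb_in_left: bool) -> str:
    """Reason from the physical 'can': if the bomb is visibly there,
    Left means death, so pay for Right; otherwise take the free Left box."""
    return "Right" if bomb_in_left else "Left"


def play(decision_procedure):
    # The predictor runs the agent's procedure on the bomb-in-Left situation
    # and places the bomb only if that run would take Right.
    predicted = decision_procedure(bomb_in_left=True)
    bomb_in_left = predicted == "Right"

    # The agent then faces whatever situation the predictor actually set up.
    choice = decision_procedure(bomb_in_left=bomb_in_left)

    if choice == "Left":
        payoff = DEATH if bomb_in_left else FREE
    else:
        payoff = RIGHT_COST
    return bomb_in_left, choice, payoff


for procedure in (committed_left_boxer, bomb_dodger):
    bomb, choice, payoff = play(procedure)
    print(f"{procedure.__name__}: bomb in Left = {bomb}, takes {choice}, payoff = {payoff}")

# committed_left_boxer: bomb in Left = False, takes Left, payoff = 0
# bomb_dodger: bomb in Left = True, takes Right, payoff = -100
```

Under this toy model, the committed Left-boxer’s world never contains the bomb at all, while the agent who conditions on the physically present bomb is exactly the one for whom the bomb gets placed, and ends up paying for Right. The fallible-predictor (cosmic ray) case isn’t modeled here, since the exchange above stipulates a perfect predictor.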