The world you’re describing is just as much a possible world as the ones I describe. That’s my point.
Inferring that I don’t burn to death depends on (1) Omega modelling my decision procedure and (2) cause and effect from there.
That’s it. No esoteric assumptions. I’m not talking about a multiverse with worlds existing next to each other or whatever, just possible worlds.
That I’m being crazy
That Left-boxing means burning to death
That your answer is obviously correct
Take your pick.
You infer the existence of me burning to death from what’s stated in the problem as well. There’s no difference.
I do have the assumption of subjunctive dependence. But without that one—if, say, the predictor predicts by looking at the color of my shoes—then I don’t Left-box anyway.
Yeah you keep repeating that. Stating it. Saying it’s simple, obvious, whatever. Saying I’m being crazy. But it’s just wrong. So there’s that.
My point is that those “other worlds” are just as much stipulated by the problem statement as that one world you focus on. So, you pay $100 and don’t burn to death. I don’t pay $100, burn to death in 1 world, and live for free in a trillion trillion − 1 worlds. Even if I value my life at $10,000,000,000,000, my choice gives more utility.
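To make that arithmetic explicit, here’s a minimal sketch in Python of the comparison (the $10,000,000,000,000 life valuation and the one-error-in-a-trillion-trillion rate are just the numbers assumed above, nothing deeper):

```python
# Expected-utility comparison, in dollars, under the valuations assumed above.
LIFE_VALUE = 10**13      # assumed value of my life ($10,000,000,000,000)
ERROR_RATE = 1 / 10**24  # the predictor errs in 1 of a trillion trillion worlds

eu_left = -ERROR_RATE * LIFE_VALUE  # burn to death only in the one mispredicted world
eu_right = -100                     # pay $100 for certain

print(eu_left)   # -1e-11: an expected loss of about a hundred-billionth of a dollar
print(eu_right)  # -100
assert eu_left > eu_right  # Left-boxing comes out ahead unless life is valued (near-)infinitely
```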
(Btw, you can call your answer “obvious” and my side “crazy” all you want, but it won’t change a thing until you actually demonstrate why and how FDT is wrong, which you haven’t done.)
But of course there isn’t actually a contradiction. (Which you know, otherwise you wouldn’t have needed to hedge by saying “in a way”.)
There is, as I explained. There are two ways of resolving it, but yours isn’t one of them. You can’t have it both ways.
“It’s simply that the problem says that if you Left-box, then the predictor predicted this, and will not have put a bomb in Left… usually. Almost always! But not quite always. It very rarely makes mistakes! And this time, it would seem, is one of those times.”
Just… no. “The predictor predicted this”, yes, so there are a trillion trillion − 1 follow-up worlds where I don’t burn to death! And yes, 1 - just 1 - world where I do. Why choose to focus on that 1 out of a trillion trillion worlds?
Because the problem talks about a bomb in Left?
No. The problem says more than that. It clearly predicts a trillion trillion − 1 worlds where I don’t burn to death. That 1 world where I do sucks, but paying $100 to avoid it seems odd. Unless, of course, you value your life infinitely (which you do I believe?). That’s fine, it does all depend on the specific valuations.
Yes, almost perfectly (well, it has to be “almost”, because it’s also stipulated that the predictor got it wrong this time).
Well, not with your answer, because you Right-box. But anyway.
“Why does it matter? We know that there’s a bomb in Left, because the scenario tells us so.”
It matters a lot, because in a way the problem description is contradicting itself (which happens more often in Newcomblike problems).
(1) It says there’s a bomb in Left.
(2) It also says that if I Left-box, then the predictor predicted this, and will not have put a bomb in Left. (Unless you assume the predictor predicts so well by looking at, I don’t know, the color of your shoes or something. But it strongly seems like the predictor has some model of your decision procedure.)
You keep repeating (1), ignoring (2), even though (2) is stipulated just as much as (1).
So, yes, my question whether you understand subjunctive dependence is justified, because you keep ignoring that crucial part of the problem.
“Irrelevant, since the described scenario explicitly stipulates that you find yourself in precisely that situation.”
It also stipulates the predictor predicts almost perfectly. So it’s very relevant.
“Yes, that’s what I’ve been saying: choosing Right in that scenario is the correct decision.”
No, it’s the wrong decision. Right-boxing is just the necessary consequence of the predictor predicting I Right-box. But insofar as this is a decision problem, Left-boxing is correct, and then the predictor predicted I would Left-box.
“No, Left-boxing means we burn to death.”
No, it means the model Left-boxed and thus the predictor didn’t put a bomb in Left.
Do you understand how subjunctive dependence works?
This works because Left-boxing means you’re in a world where the predictor’s model of you also Left-boxed when the predictor made its prediction, causing it to not put a bomb in Left.
Put differently, the situation described by MacAskill becomes virtually impossible if you Left-box, since the probability of Left-boxing and burning to death is ~0.
OR, alternatively, we say: no, we see the Bomb. We can’t retroactively change this! If we keep that part of the world fixed, then, GIVEN the subjunctive dependence between us and the predictor (assuming it’s there), that simply means we Right-box (with probability ~1), since that’s what the predictor’s model did.
Of course, then it’s not much of a decision theoretic problem anymore, since the decision is already fixed in the problem statement. If we assume we can still make a decision, then that decision is made in 2 places: first by the predictor’s model, then by us. Left-boxing means the model Left-boxes and we get to live for free. Right-boxing means the model Right-boxes and we get to live at a cost of $100. The right decision must be Left-boxing.
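Here’s a minimal sketch of that “decision made in 2 places” point, assuming the predictor runs (a model of) the same decision procedure you do; decide() is just a hypothetical stand-in for that procedure:

```python
# The same procedure runs twice: once in the predictor's model, once in your head.
def decide():
    return "Left"  # flip this to "Right" and BOTH runs flip together

model_choice = decide()                   # the predictor's model, before the boxes are set up
bomb_in_left = (model_choice == "Right")  # bomb goes in Left only if the model Right-boxes

your_choice = decide()                    # your later decision is the same output
if your_choice == "Left":
    payoff = -10**13 if bomb_in_left else 0  # burn (at the assumed -$10^13) or live for free
else:
    payoff = -100                            # live, but pay $100

print(payoff)  # 0 when decide() returns "Left"; -100 when it returns "Right"
```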
“FDT recommends knowingly choosing to burn to death? So much the worse for FDT!”
This is where, at least in part, your misunderstanding lies (IMO). FDT doesn’t recommend choosing to burn to death. It recommends Left-boxing, which avoids burning to death AND avoids paying $100.
In doing so, FDT beats both CDT and EDT, which both pay $100. It really is as simple as that. The Bomb is an argument for FDT, and quite an excellent one.
Yes, it’s acausal. No disagreement there!
(First, a correction: I said it’s neither causality nor correlation, but it is of course correlation; it’s just stronger than that.)
I’d say yes, but I haven’t thought much about control yet. If I cooperate, so does my twin. If, counterfactually, I defected instead, then my twin would also have defected. I’d see that as control, but it depends on your definition I guess.
“in a way that suggests correlation, not causation.”
It’s neither causality nor correlation: it’s subjunctive dependence, of which causality is a special case. Since your counterpart is implementing the same decision procedure as you, making decision X in situation Y means your counterpart does X in Y too.
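A toy sketch of that, for the twin prisoner’s dilemma case above (twin_procedure() is a hypothetical stand-in for the shared decision procedure):

```python
# Both twins literally run the same decision procedure, so their outputs can't diverge.
def twin_procedure():
    return "Cooperate"  # change this to "Defect" and both moves change together

my_move = twin_procedure()
twins_move = twin_procedure()
assert my_move == twins_move  # deciding X in situation Y means the counterpart decides X in Y too
```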
Less relevant now, but I got the “few years” from the post you linked. There Christiano talked about a different gap than AGI → ASI, but since overall he seems to expect linear progress, I thought my conclusion was reasonable. In retrospect, I shouldn’t have made that comment.
Thanks for offering your view, Paul, and I apologize if I misrepresented it.
But yes, Christiano is the authority here ;)
He’s talking about a gap of years :) Which is probably faster than ideal, but not FOOMy, as I understand FOOM to mean days or hours.
No, it isn’t. In the world that’s stipulated, you still have to make your decision.
That decision is made in my head and in the predictor’s head. That’s the key.