But apparently the consequences of this aren’t deterministic after all, since the predictor is fallible. So this doesn’t help.
If you reread my comments, I simplified it by assuming an infallible predictor.
How?
For this, it’s helpful to define another kind of causality (logical causality) as distinct from physical causality. You can’t physically cause something to have never been the case, because physical causality can’t reach into the past. But you can use logical causality for that, since the output of your decision determines not only your own output, but the output of all equivalent computations across the entire timeline. By Left-boxing even in the case of a bomb, you will have made it so that the predictor’s simulation of you has Left-boxed as well, resulting in the bomb never having been there.
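To make the “equivalent computations” point concrete, here is a minimal sketch that models the predictor as literally running the agent’s own decision procedure. The payoffs and the bomb-placement rule (a bomb in Left iff the predictor predicts Right; Left is free, Right costs $100) are assumptions taken from the OP’s setup, not restated in this thread.

```python
# Minimal sketch, assuming the OP's setup: the predictor places a bomb in Left
# iff it predicts "Right"; taking Left is free, taking Right costs $100.
# The predictor is modeled as an equivalent computation: it runs the same
# decision procedure the agent later runs.

DEATH = -10**12  # stand-in utility for taking Left while a bomb is in it

def run_bomb(policy):
    prediction = policy()                   # the predictor's simulation of you
    bomb_in_left = (prediction == "Right")  # bomb only if "Right" was predicted
    choice = policy()                       # your actual choice: the same computation
    if choice == "Left":
        return DEATH if bomb_in_left else 0
    return -100

def left_boxer():
    return "Left"

def right_boxer():
    return "Right"

print(run_bomb(left_boxer))   # 0    -> the bomb is never placed to begin with
print(run_bomb(right_boxer))  # -100 -> a bomb sits in Left while you pay for Right
```

The Left-boxing branch never reaches the DEATH payoff: any run in which a bomb was placed is a run in which the same computation output “Right”, which is the sense in which the choice reaches “back” without any physical backwards causation.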
… so, in other words, you’re not actually talking about the scenario described in the OP. But that’s what my comments have been about, so… everything you said has been a non sequitur…?
This really doesn’t answer the question.
Again, the scenario is: you’re looking at the Left box, and there’s a bomb in it. It’s right there in front of you. What do you do?
So, for example, when you say:
“By Left-boxing even in the case of a bomb, you will have made it so that the predictor’s simulation of you has Left-boxed as well, resulting in the bomb never having been there.”
So if you take the Left box, what actually, physically happens?
See my top-level comment: this is precisely the problem with the scenario described in the OP that I pointed out. Your reading is standard, but not the intended meaning.
But it’s also puzzling that you can’t ITT this point, to see both meanings, even if you disagree that it’s reasonable to allow/expect the intended one. Perhaps divesting yourself of an opinion on the object-level question might help? Like: what is the point the others are trying to make, specifically, and how does it work, regardless of whether it’s a wrong point, described in a way that makes no reference to its wrongness/absurdity?
If a point seems to me to be absurd, then how can I understand or explain how it works (given that I don’t think it works at all)?
As far as your top-level comment, well, my follow-up questions about it remain unanswered…
Like with bug reports, it’s not helpful to say that something “doesn’t work at all”; it’s useful to be more specific. There’s some failure of rationality at play here: you are way too intelligent to be incapable of seeing what the point is, so there is some systematic avoidance of allowing yourself to see what is going on. Heighn’s antagonistic dogmatism doesn’t help, but it shouldn’t be this debilitating.
I dropped out of that conversation because it seemed to be going in circles, and I think I’ve explained everything already. Apparently the conversation continued, green_leaf seems to be making good points, and Heighn continues needlessly upping the heat.
I don’t think object-level conversation is helpful at this point; there is some methodological issue in how you think about this that I don’t see an efficient approach to. I’m already way outside the sort of conversational norms I’ve been trying to follow for the last few years, which probably makes this comment as hopelessly unhelpful as ever, though in 2010 that would more likely have been my default mode of response.
Note that it’s my argumentation that’s being called crazy, which is a large factor in the “antagonism” you seem to observe—a word choice I don’t agree with, btw.
About the “needlessly upping the heat”: I’ve tried this discussion from multiple different angles, seeing if we can come to a resolution. So far, no, alas, but not for lack of trying. I will admit some of my reactions were short and a bit provocative, but I neither appreciate nor agree with your accusations. I have been honest in my reactions.
I was you, ten years ago. This doesn’t help. Courtesy or honesty (purposes that tend to be at odds with each other) aren’t always sufficient; it’s also necessary to entertain strange points of view that are obviously wrong, in order to talk in another’s language, and to de-escalate where escalation won’t help (escalation might help with feeding norms, but knowing which norms you are feeding is important). And often enough even that is useless, and the best thing is to give up, or at least to more decisively overturn the chess board, as I’m doing with some of the last few comments to this post, to avoid remaining in an interminable failure mode.
Just… no. Don’t act like you know me, because you don’t. I appreciate you trying to help, but this isn’t the way.
These norms are interesting in how well they fade into the background and oppose being examined. If you happen to be a programmer, or have enough of an impression of what that might be like, imagine a team where talking about bugs can be taboo in some circumstances: especially hypothetical bugs imagined out of whole cloth to check whether they happen to be there, or brought up to see whether it’s cheap to put measures in place to prevent their going unnoticed, even if it eventually turns out that they were never there to begin with. With rationality, the analogue is hypotheses about how people think, including hypotheses about norms that oppose the examination of such hypotheses and norms.
Sorry, I’m having trouble understanding your point here. I understand your analogy (I was a developer), but am not sure what you’re drawing the analogy to.
I see your point, although I have entertained Said’s view as well. But yes, I could have done better. I tend to get like this when my argumentation is being called crazy, and I should have done better.
You could have just told me this instead of complaining about me to Said though.
“So if you take the Left box, what actually, physically happens?”
You live. For free. Because the bomb was never there to begin with.
Yes, the situation does say the bomb is there. But it also says the bomb isn’t there if you Left-box.
At the very least, this is a contradiction, which makes the scenario incoherent nonsense.
(I don’t think it’s actually true that “it also says the bomb isn’t there if you Left-box”—but if it did say that, then the scenario would be inconsistent, and thus impossible to interpret.)
That’s what I’ve been saying to you: a contradiction.
And there are two ways to resolve it.
This is misleading. What happens is that the situation you found yourself in doesn’t take place with significant measure. You live mostly in different situations, not this one.
I don’t see how it is misleading. Achmiz asked what actually happens; it is, in virtually all possible worlds, that you live for free.
It is misleading because Said’s perspective is to focus on the current situation, without regarding the other situations as decision-relevant. From the UDT perspective you are advocating, the other situations remain decision-relevant, and that explains much of what you are talking about in other replies. But from that same perspective, it doesn’t matter that you live in the situation Said is asking about, so it’s misleading to keep attention on this situation in your reply without remarking on how that disagrees with the perspective you are advocating in other replies.
In the parent comment, you say “it is, in virtually all possible worlds, that you live for free”. This is confusing: are you talking about the possible worlds within the situation Said was asking about, or also about possible worlds outside that situation? The distinction matters for the argument in these comments, but your phrasing leaves it ambiguous.
No, non sequitur means something else. (If I say “A, therefore B”, but B doesn’t follow from A, that’s a non sequitur.)
I simplified the problem to make it easier for you to understand.
It does. Your question was “How?”. The answer is “through logical causality.”
You take the left box with the bomb, and it has always been empty.
This doesn’t even resemble a coherent answer. Do you really not see how absurd this is?
It doesn’t seem coherent if you don’t understand logical causality.
There is nothing incoherent about both of these being true:
(1) You Left-box under all circumstances (even if there is a bomb in the box).
(2) The expected utility of executing this algorithm is 0 (the best possible).
These two statements can both be true at the same time, and (1) implies (2).
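For concreteness, the arithmetic behind (2), under the infallible-predictor simplification and assuming the OP’s payoffs (Left is free, Right costs $100, a bomb goes in Left only if Right is predicted), is just:

EU(always Left) = 1 × $0 = $0
EU(always Right) = 1 × (−$100) = −$100

The always-Left algorithm is only ever run in worlds where the predictor’s simulation also output Left, so the bomb case carries zero probability; that is the sense in which (1) implies (2).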
None of that is responsive to the question I actually asked.
It is. The response to your question “So if you take the Left box, what actually, physically happens?” is “Physically, nothing.” That’s why I defined logical causality—it helps understand why (1) is the algorithm with the best expected utility, and why yours is worse.
What do you mean by “Physically, nothing.”? There’s a bomb in there—does it somehow fail to explode? How?
It fails to have ever been there.
Do you see how that makes absolutely no sense as an answer to the question I asked? Like, do you see what makes what you said incomprehensible, what makes it appear to be nonsense? I’m not asking you to admit that it’s nonsense, but can you see why it reads as bizarre moon logic?
I can, although I indeed don’t think it is nonsense.
What do you think our (or specifically my) viewpoint is?
I’m no longer sure; you and green_leaf appear to have different, contradictory views, and at this point that divergence has confused me enough that I could no longer say confidently what either of you seem to be saying without going back and carefully re-reading all the comments. And that, I’m afraid, isn’t something that I have time for at the moment… so perhaps it’s best to write this discussion off, after all.
Of course! Thanks for your time.
You’re still neglecting the other kind of causality, so “nothing” makes no sense to you (since something clearly happens).
I’m tapping out, since I don’t see you putting any effort into understanding this topic.
Agreed, but I think it’s important to stress that it’s not like you see a bomb, Left-box, and then see it disappear or something. It’s just that Left-boxing means the predictor already predicted that, and the bomb was never there to begin with.
Put differently, you can only Left-box in a world where the predictor predicted you would.
What stops you from Left-boxing in a world where the predictor didn’t predict that you would?
To make the question clearer, let’s set aside all this business about the fallibility of the predictor. Sure, yes, the predictor’s perfect, it can predict your actions with 100% accuracy somehow, something about algorithms, simulations, models, whatever… fine. We take all that as given.
So: you see the two boxes, and after thinking about it very carefully, you reach for the Right box (as the predictor always knew that you would).
But suddenly, a stray cosmic ray strikes your brain! No way this was predictable—it was random, the result of some chain of stochastic events in the universe. And though you were totally going to pick Right, you suddenly grab the Left box instead.
Surely, there’s nothing either physically or logically impossible about this, right?
So if the predictor predicted you’d pick Right, and there’s a bomb in Left, and you have every intention of picking Right, but due to the aforesaid cosmic ray you actually take the Left box… what happens?
But the scenario stipulates that the bomb is there. Given this, taking the Left box results in… what? Like, in that scenario, if you take the Left box, what actually happens?
The scenario also stipulates the bomb isn’t there if you Left-box.
What actually happens? Not much. You live. For free.
Yes, that’s correct.
By executing the first algorithm, the bomb has never been there.
Here it’s useful to distinguish between agentic ‘can’ and physical ‘can.’
Since I assume a deterministic universe for simplification, there is only one physical ‘can.’ But there are two agentic ‘can’s: no matter the prediction, I can agentically choose either way. The predictor’s prediction is logically posterior to my choice, and his prediction (and the bomb’s presence) are the way they are because of my choice. So I can Left-box even if there is a bomb in the left box, even though it’s physically impossible.
(It’s better to use the agentic ‘can’ over the physical ‘can’ for decision-making, since that use of ‘can’ allows us to act as if we determined the output of all computations identical to us, which brings about better results. The agent that uses the physical ‘can’ as their definition will see the bomb more often.)
Unless I’m missing something.
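The parenthetical claim that the physical-‘can’ agent “will see the bomb more often” can be made quantitative with a toy generalization in which the predictor has a small error rate (an added assumption; the sub-thread above sets it to zero): the always-Left algorithm faces a bomb only when the predictor errs, while the always-Right algorithm faces one whenever the predictor is correct. A rough sketch:

```python
import random

# Toy check of "sees the bomb more often", with a fallible predictor of error
# rate EPS. EPS > 0 is an added assumption; the thread above simplifies to EPS = 0.
EPS = 0.01

def bomb_frequency(policy_output, trials=100_000):
    seen = 0
    for _ in range(trials):
        erred = random.random() < EPS
        prediction = ({"Left": "Right", "Right": "Left"}[policy_output]
                      if erred else policy_output)
        if prediction == "Right":   # the bomb goes in Left iff "Right" is predicted
            seen += 1
    return seen / trials

print(bomb_frequency("Left"))   # ~EPS     : the Left-boxer almost never faces a bomb
print(bomb_frequency("Right"))  # ~1 - EPS : the Right-boxer almost always does
```

Nothing in this changes the decision-theoretic dispute above; it only quantifies the frequency claim in the parenthetical.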