I don’t get it, I have to admit. All the experiment seems to be saying is that “if I take the $1, I exist only as a short-term simulation in Omega’s mind”. It says you don’t exist as a long-term separate individual, but doesn’t say you don’t exist in this very moment...
Simulation is a very specific form of prediction (but the most intuitive one, when it comes to predicting difficult decisions). Prediction doesn’t imply simulation. At this very moment I predict that you will choose NOT to cut your own hand off with an axe when asked to, but I’m not simulating you.
In that case (I’ll return to the whole simulation/prediction issue some other time), I don’t follow the logic at all. If Omega offers you that deal, and you take the money, all that you have shown is that Omega is in error.
But maybe it’s a consequence of advanced decision theory?
That’s the central issue of this paradox: the part of the scenario before you take the money can actually exist, but if you choose to take the money, it follows that it doesn’t. The paradox doesn’t take for granted that the described scenario takes place; it describes what happens (or could happen) from your own perspective, the way you’d plan your own actions, not from an external perspective.
Think of your thought process in the case where, in the end, you decide not to take the money: how you consider taking the money, and what that action would mean (that is, what its effect is in the generalized sense of TDT, like the effect of your cooperating in the Prisoner’s Dilemma on the other player, or the effect of one-boxing on the contents of the boxes). I suggest that the planned action of taking the money means that you don’t exist in that scenario.
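To make that generalized effect concrete, here is a minimal Python sketch (my own illustration, not part of TDT; the names `decide` and `newcomb_payoff` are invented). The same decision procedure is evaluated both by Omega, to fill the boxes, and by the agent, to choose, so changing the procedure’s output changes both places at once:

```python
# Hypothetical sketch: the "effect" of a decision includes every place
# the decision procedure's output is used, not just the agent's own act.

def decide(policy: str) -> str:
    # Stand-in for the agent's decision procedure; a fixed policy for clarity.
    return policy

def newcomb_payoff(policy: str) -> int:
    prediction = decide(policy)      # Omega's run of the same procedure
    opaque = 1_000_000 if prediction == "one-box" else 0
    choice = decide(policy)          # the agent's own run
    return opaque if choice == "one-box" else opaque + 1_000

# One-boxing does better precisely because decide() appears twice:
assert newcomb_payoff("one-box") > newcomb_payoff("two-box")
```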
I see it, somewhat. But this sounds a lot like “I’m Omega, I am trustworthy and accurate, and I will only speak to you if I’ve predicted you will not imagine a pink rhinoceros as soon as you hear this sentence”.
The correct conclusion seems to be that Omega is not what he says he is, rather than “I don’t exist”.
When the problem contains a self-contradiction like this, there is not actually one “obvious” proposition which must be false. One of them must be false, certainly, but it is not possible to derive which one from the problem statement.
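One toy way to see that underdetermination (my own framing, with invented labels): encode the premises as propositions and check which subsets are consistent. Any single premise can be dropped to restore consistency, so the problem statement alone can’t single out the false one.

```python
# Toy model, illustrative only. Premises:
#   A: Omega's predictions are always correct.
#   B: Omega made the offer, i.e. it predicted "you will refuse".
#   C: You took the money.
# The only derivation encoded here: A and B together entail not-C.

def consistent(premises: frozenset) -> bool:
    return not {"A", "B", "C"} <= premises

assert not consistent(frozenset("ABC"))   # all three together contradict
for dropped in "ABC":
    assert consistent(frozenset("ABC") - {dropped})  # each drop works equally well
```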
Compare this problem to another, possibly more symmetrical, problem with self-contradictory premises:
http://en.wikipedia.org/wiki/Irresistible_force_paradox
The decision diagonal in TDT is a simple computation (at least, it looks simple assuming large complicated black boxes, like a causal model of reality), and there’s no particular reason that equation can only execute in sentient contexts. Faced with Omega in this case, I take the $1 (there is no reason for me not to) and conclude that Omega incorrectly executed the equation in the context outside my own mind.
Even if we suppose that “cogito ergo sum” presents an extra bit of evidence to me, whereby I truly know that I am the “real” me and not just the simple equation in a nonsentient context, it is still easy enough for Omega to simulate that equation plus the extra (false) bit of info, thereby recorrelating it with me.
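A sketch of that recorrelation, under the assumption that the extra bit can be treated as just another input to the equation (`decision_equation` and `i_am_real` are invented names):

```python
# Hypothetical sketch: the agent conditions on a private bit ("I am the
# real me"), hoping to behave differently inside a simulation. Omega runs
# the same equation with that bit set (falsely) to True.

def decision_equation(offer: str, i_am_real: bool) -> str:
    if not i_am_real:
        return "refuse"  # the attempted "only in simulation" behavior
    return "take" if offer == "$1" else "refuse"

real_run = decision_equation("$1", i_am_real=True)   # the agent's own run
omega_run = decision_equation("$1", i_am_real=True)  # Omega's run, bit forced on

assert real_run == omega_run == "take"  # the two runs cannot diverge
```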
If Omega really follows the stated algorithm for Omega, then the decision equation never executes in a sentient context. If it executes in a sentient context, then I know Omega wasn’t following the stated algorithm. Just like if Omega says “I will offer you this $1 only if 1 = 2” and then offers you the $1.
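The stated algorithm itself fits the same sketch format (again with invented names, assuming an agent whose equation outputs “take”): Omega makes the offer only when its run of the equation outputs “refuse”, so for this agent the offer is never made, and the equation never executes in a sentient context facing a real offer.

```python
# Hedged sketch of Omega's stated algorithm.

def decision_equation(offer: str) -> str:
    return "take" if offer == "$1" else "refuse"  # a money-taking agent

def omega_offers() -> bool:
    # Omega's prediction is just another execution of the same equation,
    # outside any mind.
    return decision_equation("$1") == "refuse"

print(omega_offers())  # False: the scenario never arises for this agent
```

If you nonetheless find yourself facing the offer, some premise has already failed, which is exactly the “1 = 2” analogy.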