If Omega runs a simulation in some cases (say, due to insufficiency of lesser predictive techniques), and in some of those cases the simulated individual tells Omega to buzz off, has Omega lied to those simulated individuals? (I phrase this as a question because I haven’t been closely following your reasoning, so I’m not arguing for or against anything you’ve written so far—it’s a genuine inquiry, not rhetoric.)
Omega has to make a prediction of your behaviour, so it has to simulate you, not itself. Your decision process is simulated inside Omega’s processor, with the input “Omega says that it predicts X”. There is no need for Omega to simulate its own decision process, since that is completely irrelevant to this scenario.
By analogy, I can “simulate” the physics of boiling water to predict that if I put my hand in, the water will cool down a few degrees, even if I know that I will not put my hand in. I don’t have to simulate a copy of myself that actually puts its hand in, and so you can’t use my prediction to falsify the statement “I never harm myself”.
Of course, if Omega simulates itself, it may run into all sorts of self-referential problems, but that isn’t the point of Omega, and it has nothing to do with “Omega never lies”.
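A minimal sketch of this prediction-without-self-simulation idea, assuming a toy agent model; the function names and the decision rule are invented for illustration and are not anything specified in the thread:

```python
# Toy sketch: Omega predicts by running a model of *your* decision process
# on a hypothetical announcement. Nothing here models Omega's own
# deliberation, so no self-simulation is involved.
# The decision rule below is a made-up stand-in, not a claim about any agent.

def modelled_agent_decision(announcement: str) -> str:
    """Stand-in for Omega's model of the agent's decision process."""
    if "predicts you will one-box" in announcement:
        return "one-box"
    return "two-box"

def omega_predicts(announcement: str) -> str:
    # Feed the hypothetical input "Omega says that it predicts X" to the
    # agent model and read off the result.
    return modelled_agent_decision(announcement)

if __name__ == "__main__":
    print(omega_predicts("Omega predicts you will one-box"))  # -> one-box
```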
I used the phrase “simulated individual”; it was MrHen who was talking about Omega simulating itself, not me. Shouldn’t this reply descend from that comment?
Probably it should, but I was unable (too lazy) to trace the moment where the idea of Omega simulating itself first appeared. Thanks for the correction.
This isn’t strictly true.
But I agree with the rest of your point.
It’s true by hypothesis in my original question. It’s possible we’re talking about an empty case—perhaps humans just aren’t that complicated.
Yep. I am just trying to make the distinction clear.
Your question relates to prediction via simulation.
My original point makes no assumption about how Omega predicts.
In the above linked comment, EY noted that simulation wasn’t strictly required for prediction.
We are in violent agreement.
Very clever. The statement “Omega never lies.” is apparently much less innocent than it seems. But I don’t think there is such a problem with the statement “Omega will not lie to you during the experiment.”
I would say no.
Why would you say such a weird thing?
What do you mean?
I’m sorry. :) I mean that it is perfectly obvious to me that in Cyan’s thought experiment Omega is indeed telling a falsehood to the simulated individuals. How would you argue otherwise?
Of course, the simulated individual has an information disadvantage: she does not know that she is inside a simulation. This permits Omega many ugly lawyerly tricks. (“Ha-ha, this is not a five-dollar bill, this is a SIMULATED five-dollar bill. By the way, you are also simulated, and now I will shut you down, cheapskate.”)
Let me note that I completely agree with the original post, and Cyan’s very interesting question does not invalidate your argument at all. It only means that the source of Omega’s stated infallibility is not simulate-and-postselect.
I didn’t see Cyan’s question as offering any particular position, so I didn’t feel obligated to give a reason more thorough than what I wrote elsewhere in the thread.
Omega isn’t assigned the status of Liar until it actually does something. I can imagine myself lying all the time, but this doesn’t mean that I have lied. When Omega simulates itself, it can simulate invalid scenarios and then check them off the list of possible outcomes. Since Omega will avoid all scenarios in which it would lie, it won’t actually lie. This doesn’t mean that it cannot simulate what would happen if it did lie.
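A rough sketch of this “simulate it, then check it off the list” idea, i.e. simulate-and-postselect; the candidate statements and helper names below are assumptions made up purely for illustration:

```python
# Toy sketch of simulate-and-postselect: Omega simulates the outcome of each
# candidate statement and discards ("checks off") every scenario in which the
# statement would turn out false, so it never actually utters a lie.
# All names and candidate statements here are invented for illustration.

from typing import Callable, Optional

def statement_comes_true(statement: str, agent_model: Callable[[str], str]) -> bool:
    """Simulate the agent's response and check whether the statement holds."""
    choice = agent_model(statement)
    if statement == "I predict you will one-box":
        return choice == "one-box"
    if statement == "I predict you will two-box":
        return choice == "two-box"
    return False

def choose_truthful_statement(agent_model: Callable[[str], str]) -> Optional[str]:
    candidates = ["I predict you will one-box", "I predict you will two-box"]
    # Scenarios in which Omega would end up having lied are simulated but
    # then excluded; only statements that survive postselection get uttered.
    truthful = [s for s in candidates if statement_comes_true(s, agent_model)]
    return truthful[0] if truthful else None
```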
Simulating somebody is doing something, especially from the point of view of the simulated. (Note that in Cyan’s thought experiment she has a consciousness and all.)
We postulated that Omega never lies. The simulated consciousness hears a lie. Now, as far as I can see, you have two major ways out of the contradiction. The first is that it is not Omega that does this lying, but simulated-Omega. The second is that lying to a simulated consciousness does not count as lying, at least not in the real world.
The first is perfectly viable, but it highlights what for me was the main take-home message from Cyan’s thought experiment: that “Omega never lies.” is harder to formalize than it appears.
The second is also perfectly viable, but it will be extremely unpopular here at LW.
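To make that take-home point concrete, here is a toy sketch of the two readings; the Utterance record and its fields are assumptions introduced only for this illustration, not anything from the thread:

```python
# Toy formalization sketch: two non-equivalent readings of "Omega never lies".
# The Utterance record and its fields are made up for this illustration.

from dataclasses import dataclass
from typing import Iterable

@dataclass
class Utterance:
    claim: str
    true_where_uttered: bool   # was the claim true in the context it was made?
    inside_simulation: bool    # was it addressed to a simulated consciousness?

def never_lies_strict(utterances: Iterable[Utterance]) -> bool:
    # Reading 1: no utterance anywhere, real or simulated, may be false.
    return all(u.true_where_uttered for u in utterances)

def never_lies_real_world(utterances: Iterable[Utterance]) -> bool:
    # Reading 2: only real-world utterances count; whatever simulated-Omega
    # tells simulated people is exempt.
    return all(u.true_where_uttered for u in utterances if not u.inside_simulation)
```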
Perhaps I am not fully understanding what you mean by simulation. If I create a simulation, what does this mean?
In this context, something along the lines of whole brain emulation.