I didn’t see Cyan’s question as offering any particular position, so I didn’t feel obligated to give a more thorough reason than what I wrote elsewhere in the thread.
Omega isn’t assigned the status of Liar until it actually does something. I can imagine myself lying all the time, but this doesn’t mean that I have lied. When Omega simulates itself, it can simulate invalid scenarios and then check them off the list of possible outcomes. Since Omega will avoid every scenario in which it would lie, it won’t actually lie. This doesn’t mean that it cannot simulate what would happen if it did lie.
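To make the procedure concrete, here is a minimal sketch of that simulate-and-postselect loop, assuming Omega can enumerate its candidate announcements and simulate the agent’s response to each. The agent model and the announcement strings are invented purely for illustration:

```python
# A toy sketch of simulate-and-postselect. Everything here is
# illustrative: the agent model and the candidate announcements
# are invented for the example, not taken from the thread.

def simulated_agent(announcement: str) -> str:
    # Stand-in for the simulation step (think whole brain emulation):
    # a toy agent that one-boxes no matter what it is told.
    return "one-box"

def comes_true(announcement: str, action: str) -> bool:
    # Did the announcement correctly describe the simulated action?
    return announcement == f"you will {action}"

candidates = ["you will one-box", "you will two-box"]

# Simulate every candidate scenario, including ones where the
# announcement would turn out to be false...
outcomes = {a: simulated_agent(a) for a in candidates}

# ...then postselect: keep only the announcements that came true in
# their own simulation. Omega only ever utters a survivor, so in the
# real world it never lies.
truthful = [a for a in candidates if comes_true(a, outcomes[a])]
print(truthful)  # ['you will one-box']
```

Of course, in the discarded branch the simulated agent still heard a false announcement, which is exactly the point at issue in the replies below.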
Omega isn’t assigned the status of Liar until it actually does something.
Simulating somebody is doing something, especially from the point of view of the simulated. (Note that in Cyan’s thought experiment she has a consciousness and all.)
We postulated that Omega never lies. The simulated consciousness hears a lie. Now, as far as I can see, you have two major ways out of the contradiction. The first is that it is not Omega that does this lying, but simulated-Omega. The second is that lying to a simulated consciousness does not count as lying, at least not in the real world.
The first is perfectly viable, but it highlights what for me was the main take-home message from Cyan’s thought experiment: that “Omega never lies” is harder to formalize than it appears.
The second is also perfectly viable, but it will be extremely unpopular here at LW.
What do you mean?
I’m sorry. :) I mean that it is perfectly obvious to me that, in Cyan’s thought experiment, Omega is indeed telling a falsehood to the simulated individuals. How would you argue otherwise?
Of course, the simulated individual has an information disadvantage: she does not know that she is inside a simulation. This permits Omega many ugly lawyerly tricks. (“Ha-ha, this is not a five-dollar bill, this is a SIMULATED five-dollar bill. By the way, you are also simulated, and now I will shut you down, cheapskate.”)
Let me note that I completely agree with the original post, and Cyan’s very interesting question does not invalidate your argument at all. It only means that the source of Omega’s stated infallibility is not simulate-and-postselect.
Perhaps I am not fully understanding what you mean by simulation. If I create a simulation, what does that mean?
In this context, something along the lines of whole brain emulation.