Omega has to make a prediction of your behaviour, so it has to simulate you, not itself. Your decision algorithm is simulated inside Omega’s processor, with the input “Omega tells you that it predicts X”. There is no need for Omega to simulate its own decision process, since it is completely irrelevant to this scenario.
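To make the asymmetry concrete, here is a minimal sketch (in Python, with all names hypothetical): the predictor runs the agent’s decision procedure on the announcement it plans to make; the predictor’s own decision process never appears inside the simulation.

```python
# Minimal sketch: prediction by simulating the *agent*, not the predictor.
# All names here are hypothetical illustrations.

def agent_decision(message: str) -> str:
    """The agent's decision procedure, as reconstructed by Omega."""
    if message == "Omega predicts that you will one-box":
        return "one-box"
    return "two-box"

def omega_predict(decision_procedure, message: str) -> str:
    # Omega feeds its planned announcement to a simulation of the agent.
    # Nothing here invokes Omega's own reasoning, so no self-reference arises.
    return decision_procedure(message)

print(omega_predict(agent_decision, "Omega predicts that you will one-box"))
# -> one-box
```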
As an analogy, I can “simulate” the physics of boiling water to predict that if I put my hand in, the water will cool down a few degrees, even though I know I will not put my hand in. I don’t have to simulate a copy of myself actually putting my hand in, so you can’t use my prediction to falsify the statement “I never harm myself”.
Of course, if Omega simulates itself, it may run into all sorts of self-referential problems, but that isn’t the point of Omega, and has nothing to do with “Omega never lies”.
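For contrast, a hypothetical sketch of the regress such self-simulation would invite: a predictor that must simulate itself to produce its own output never bottoms out.

```python
# Hypothetical contrast: a predictor that simulates *itself*.
# To predict its own announcement it must first simulate the Omega
# making that announcement, and so on without end.

def omega_self_predict() -> str:
    return omega_self_predict()  # infinite regress; raises RecursionError
```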
I used the phrase “simulated individual”; it was MrHen who was talking about Omega simulating itself, not me. Shouldn’t this reply descend from that comment?
Probably it should, but I was unable (too lazy) to trace the moment where the idea of Omega simulating itself first appeared. Thanks for the correction.
This isn’t strictly true.
But I agree with the rest of your point.
It’s true by hypothesis in my original question. It’s possible we’re talking about an empty case—perhaps humans just aren’t that complicated.
Yep. I am just trying to make the distinction clear.
Your question relates to prediction via simulation.
My original point makes no assumption about how Omega predicts.
In the comment linked above, EY noted that simulation wasn’t strictly required for prediction.
We are in violent agreement.