No, I assumed “not malevolent” would cover that, but I guess it really doesn’t. I added a clause to explicitly point out that Omega isn’t lying.
If Omega never lies, and if Omega makes all predictions by running perfect simulations, then the scenario you gave is inconsistent. For Omega to predict that you will give it $5 after being told that you will give it $5, it must run a simulation of you in which it tells you that it has predicted that you will give it $5. But since it runs this simulation before making the prediction, Omega is lying in the simulation.
I don’t understand this. Breaking it down:
Omega predicts I will give it $5
Omega appears and tells me it predicted I will give it $5
Telling me about the prediction implies that the telling was part of the original prediction
If the telling was part of the original prediction, then it was part of a simulation of future events
The simulation involves Omega telling me but...
This is where I lose the path. But what? I don’t understand where the lie is. If I translate this to real life:
I predict Sally will give me $5
I walk up to Sally and tell her I predict she will give me $5
I then explain that she owes me $5 and she already told me she would give me the $5 today
Sally gives me $5 and calls me weird
Where did I lie?
Omega predicts I will give it $5
Omega appears and tells me it predicted I will give it $5
Omega tells me why I will give it $5
I give Omega $5
I don’t see how including the prediction in the prediction is a lie. It is completely trivial even for me, a flawed predictor, to include a prediction within the prediction itself.
Essentially: “But since it runs this simulation before making the prediction, Omega is lying in the simulation.”
No, he isn’t, because the simulation assumes that the statement will be made in the future. Thinking, “Tomorrow, I will say it is Thursday,” does not make me a liar today. You can even say, “Tomorrow, I will say it is today,” and not be lying, because “today” is relative to the “tomorrow” in the thought.
Omega saying, “I predict you will act as such when I tell you I have predicted you will act as such,” has no lie.
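For concreteness, here is a minimal sketch of that consistency claim; the subject_response model and the strings in it are invented purely for illustration, not taken from the post. The point is only that a prediction whose content includes its own announcement is consistent as long as the subject’s reaction to the announcement matches what was predicted.

```python
# Minimal sketch (hypothetical model and strings): a prediction that mentions
# its own announcement is consistent as long as the subject's reaction to the
# announcement matches what the prediction says will happen.

def subject_response(announcement):
    """Toy model of the subject: hearing the prediction (plus the reminder
    about the $5 owed) is enough to make them hand the money over."""
    return "gives $5"

announcement = "I have predicted that you will give me $5 after I tell you this."
predicted_outcome = "gives $5"

# The announcement is part of the predicted scenario, and what actually follows
# the announcement matches the prediction: a fixed point, with no lie anywhere.
assert subject_response(announcement) == predicted_outcome
print("prediction, announcement, and outcome are all consistent")
```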
The simulated Omega says, “I have predicted blah blah blah,” when Omega has made no such prediction yet. That’s a lie.
Omega doesn’t have to simulate people. It just has to know. For example, I know that if Omega says to you “Please accept a million dollars” you’ll take it. I didn’t have to simulate you or Omega to know that.
No, it isn’t, because the simulated Omega will be saying that after the prediction was made.
When the simulated Omega says “I” it is referring to the Omega that made the prediction.
If Omega runs a simulation for tomorrow that includes it saying, “Today is Thursday,” the Omega in the simulation is not lying.
If Omega runs a simulation that includes it saying, “I say GROK. I have said GROK,” the simulation is not lying, even if Omega has not yet said GROK. The “I” in “I have said” refers to the Omega of the future, the one that just said GROK.
If Omega runs a simulation that includes it doing X and then saying, “I have done X,” there is no lie.
If Omega runs a simulation that includes it predicting an event and then saying, “I have predicted this event,” there is no lie.
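If it helps, here is a minimal sketch of the indexical point in the GROK and “I have done X” examples, assuming (purely for illustration) that a simulation is just a time-ordered list of events; the event names and the statement_is_true helper are invented for this sketch. A statement like “I have predicted X” is checked against the simulated timeline’s own past, so it can be true inside the simulation even though the real Omega has announced nothing yet.

```python
# Minimal sketch (hypothetical event names): statements inside a simulated
# timeline are checked against that timeline's own history, not against the
# real world's history.

def statement_is_true(timeline, index):
    """'I have predicted X' is true iff an earlier event in the SAME timeline
    records that prediction."""
    event = timeline[index]
    if event[0] == "says_i_have_predicted":
        claimed = event[1]
        return ("predicts", claimed) in timeline[:index]
    return True  # other event types make no claims about the past

# Simulated timeline: the simulated Omega first predicts, then reports it.
simulated_timeline = [
    ("predicts", "you give me $5"),
    ("says_i_have_predicted", "you give me $5"),
    ("subject_pays", 5),
]

# Inside the simulation the report comes after the prediction, so it is true
# there, even though the real Omega has not said anything yet.
print(all(statement_is_true(simulated_timeline, i)
          for i in range(len(simulated_timeline))))  # True
```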
Does the simulated Omega run its own simulation in order to make its prediction? And does that simulation run its own simulation too?
Either way, I don’t see a lie.
If Omega runs a simulation in some cases (say, due to insufficiency of lesser predictive techniques), and in some of those cases the simulated individual tells Omega to buzz off, has Omega lied to those simulated individuals? (I phrase this as a question because I haven’t been closely following your reasoning, so I’m not arguing for or against anything you’ve written so far—it’s a genuine inquiry, not rhetoric.)
Omega has to make a prediction of your behaviour, so it has to simulate you, not itself. Your decision process is simulated inside Omega’s processor, with the input “Omega tells you that it predicts X”. There is no need for Omega to simulate its own decision process, since it is completely irrelevant to this scenario.
As an analogy, I can “simulate” the physics of boiling water to predict that if I put my hand in, the water will cool down a few degrees, even if I know that I will not put my hand in. I don’t have to simulate a copy of myself which actually puts its hand in, and so you can’t use my prediction to falsify the statement “I never harm myself”.
Of course, if Omega simulates itself, it may run into all sorts of self-referential problems, but that isn’t the point of Omega, and it has nothing to do with “Omega never lies”.
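A minimal sketch of that division of labour, with the function names invented for illustration: Omega’s prediction is just an evaluation of a model of the subject’s decision procedure on the input “Omega tells you that it predicts X”; Omega’s own deliberation never has to appear inside the model, which is why no self-reference arises.

```python
# Minimal sketch (hypothetical names): Omega predicts by running a model of the
# SUBJECT's decision procedure on the message it is considering sending.
# Nothing here models Omega's own deliberation.

def subject_decision(message):
    """Stand-in for the subject's decision process (the thing being modelled)."""
    if message == "I predict you will give me $5, and you already agreed you owe it":
        return "pay $5"
    return "refuse"

def omega_predicts(candidate_message):
    # "Simulating you" just means evaluating your decision function on the
    # input "Omega tells you that it predicts X". Omega itself appears only
    # as an input string, not as a second simulated agent.
    return subject_decision(candidate_message)

print(omega_predicts("I predict you will give me $5, and you already agreed you owe it"))
# -> pay $5
```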
I used the phrase “simulated individual”; it was MrHen who was talking about Omega simulating itself, not me. Shouldn’t this reply descend from that comment?
Probably it should, but I was unable (too lazy) to trace the moment where the idea of Omega simulating itself first appeared. Thanks for the correction.
This isn’t strictly true.
But I agree with the rest of your point.
It’s true by hypothesis in my original question. It’s possible we’re talking about an empty case—perhaps humans just aren’t that complicated.
Yep. I am just trying to make the distinction clear.
Your question relates to prediction via simulation.
My original point makes no assumption about how Omega predicts.
In the above linked comment, EY noted that simulation wasn’t strictly required for prediction.
We are in violent agreement.
Very clever. The statement “Omega never lies.” is apparently much less innocent than it seems. But I don’t think there is such a problem with the statement “Omega will not lie to you during the experiment.”
I would say no.
Why would you say such a weird thing?
What do you mean?
I’m sorry. :) I mean that it is perfectly obvious to me that in Cyan’s thought experiment Omega is indeed telling a falsehood to the simulated individuals. How would you argue otherwise?
Of course, the simulated individual has an information disadvantage: she does not know that she is inside a simulation. This permits Omega many ugly lawyerly tricks. (“Ha-ha, this is not a five dollar bill, this is a SIMULATED five dollar bill. By the way, you are also simulated, and now I will shut you down, cheapskate.”)
Let me note that I completely agree with the original post, and Cyan’s very interesting question does not invalidate your argument at all. It only means that the source of Omega’s stated infallibility is not simulate-and-postselect.
I didn’t see Cyan’s question as offering any particular position so I didn’t feel obligated to give a reason more thorough than what I wrote elsewhere in the thread.
Omega isn’t assigned the status of Liar until it actually does something. I can imagine myself lying all the time but this doesn’t mean that I have lied. When Omega simulates itself, it can simulate invalid scenarios and then check them off the list of possible outcomes. Since Omega will avoid all scenarios where it will lie, it won’t actually lie. This doesn’t mean that it cannot simulate what would happen if it did lie.
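Here is a minimal sketch of that simulate-and-discard idea, with every name and string invented for illustration: Omega tries candidate announcements against a model of the subject, checks off every scenario in which the announcement would come out false, and only ever delivers an announcement that its model says will come true, so nothing it says in the real world is a lie.

```python
# Minimal sketch (hypothetical model): exploring a scenario in which the
# announcement would be false is not the same as lying; such scenarios are
# simply discarded and never acted out.

def simulate_subject(announcement):
    """Stand-in model of how the subject reacts to hearing an announcement."""
    return "pays $5" if announcement == "I predict you will give me $5" else "walks away"

def announcement_comes_true(announcement, outcome):
    # Crude check for this toy example: the announcement is true iff the
    # simulated subject ends up paying the predicted $5.
    return announcement == "I predict you will give me $5" and outcome == "pays $5"

def choose_real_announcement(candidates):
    for announcement in candidates:
        outcome = simulate_subject(announcement)
        # Post-selection: candidates that the simulation falsifies are checked
        # off the list and never spoken in the real world.
        if announcement_comes_true(announcement, outcome):
            return announcement
    return None  # if nothing survives, Omega simply never shows up

print(choose_real_announcement([
    "I predict you will give me your car",  # falsified by the simulation
    "I predict you will give me $5",        # verified by the simulation
]))
# -> I predict you will give me $5
```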
Simulating somebody is doing something, especially from the point of view of the simulated. (Note that in Cyan’s thought experiment she has a consciousness and all.)
We postulated that Omega never lies. The simulated consciousness hears a lie. Now, as far as I can see, you have two major ways out of the contradiction. The first is that it is not Omega that does this lying, but simulated-Omega. The second is that lying to a simulated consciousness does not count as lying, at least not in the real world.
The first is perfectly viable, but it highlights what for me was the main take-home message from Cyan’s thought experiment: That “Omega never lies.” is harder to formalize than it appears.
The second is also perfectly viable, but it will be extremely unpopular here at LW.
Perhaps I am not fully understanding what you mean by simulation. If I create a simulation, what does this mean?
In this context, something along the lines of whole brain emulation.
The simulated prediction doesn’t need to be accurate. Omega just doesn’t make the prediction to the real you if it is proven inaccurate for the simulated you.
In this sort of scenario, the prediction is not interesting, because it does not affect anything. The subject would give the $5 whether the prediction was made or not.
It doesn’t matter if the prediction is interesting. The prediction is accurate.
This comment is directly addressing the statement:
By “the prediction is not interesting”, I mean that it does not say anything about predictions, or general scenarios involving Omega. It does not illustrate any problem with Omega.
Okay. To address this point I need to know what, specifically, you were referring to when you said, “this sort of scenario.”
I mean the case where Omega has some method, independent of declaring predictions about it, of convincing the subject to give it $5: it appears, declares the prediction, and then proceeds to use the other method.
Omega isn’t using mind-control. Omega just knows what is going to happen. Using the prediction itself as an argument to give you $5 is a complication on the question that I happen to be addressing.
In other words, it doesn’t matter why you give Omega $5.
I said this in the original post:
If this scenario includes a long argument about why you should give it $5, so be it. If it means Omega gives you $10 in return, so be it. But it doesn’t matter for the sake of the question. It matters for the answer, but the question doesn’t need to include these details because the underlying problem is still the same. Omega made a prediction and now you need to act. All of the excuses and whining and arguing will eventually end with you handing Omega $5. Omega’s prediction will have included all of this bickering.
All of the Omega scenarios are more complicated than the one I am talking about. That, exactly, is why I am talking about this one.
In the other Omega scenarios, the predictions are an integral part of the scenario. Remove the prediction and the whole thing falls apart.
In your scenario, the prediction doesn’t matter. Remove the prediction, and everything else is exactly the same.
It is therefore absurd that you think your scenario says something about the other scenarios, because they all involve predictions.
The specific prediction isn’t important here, but the definition of Omega as a perfect predictor sure is important. This is exactly what I wanted to do: Ignore the details of the prediction and talk about Omega.
Removing the prediction entirely would cause the scenario to fall apart, because then we could replace Omega with anything. Omega needs to be here and it needs to be making some prediction. The prediction itself is a causal factor only in the sense that Omega wouldn’t appear before you if it didn’t expect to get $5.
It’s a tautology, and that is my point. The only time Omega would ever appear is if its request would be granted.
In my opinion, it is more accurate to say that the reason behind your action is completely irrelevant. It doesn’t matter that the prediction itself isn’t what causes you to give Omega $5.
It isn’t really absurd. Placing restrictions on the scenario will cause things to go crazy and it is this craziness that I want to look at.
People still argue about one-boxing. The most obvious, direct application of this post is to show why one-boxing is the correct answer. Newcomb’s problem is actually why I ended up writing this. Every time I started working on the math behind Newcomb’s I would bump into the claim presented in this post and realize that people were going to object.
So, instead of talking about this claim inside of a post on Newcomb’s, I isolated it and presented it on its own. And people still objected to it, so I am glad I did this.