My Fundamental Question About Omega
Omega has appeared to us inside of puzzles, games, and questions. The basic concept behind Omega is that it is (a) a perfect predictor and (b) not malevolent. The practical implications behind these points are that (a) it doesn’t make mistakes and (b) you can trust its motives in the sense that it really, honestly doesn’t care about you. This bugger is True Neutral and is good at it. And it doesn’t lie.
A quick peek at Omega’s presence on LessWrong reveals Newcomb’s problem and Counterfactual Mugging as the most prominent examples. For those that missed them, other articles include Bead Jars and The Lifespan Dilemma.
Counterfactual Mugging was the most annoying for me, however, because I thought the answer was completely obvious, and apparently it isn't. Instead of going around in circles with a complicated scenario, I decided to find a simpler version that reveals what I consider to be the fundamental confusion about Omega.
Suppose that Omega, as defined above, appears before you and says that it predicted you will give it $5. What do you do? If Omega is a perfect predictor, and it predicted you will give it $5… will you give it $5?
The answer to this question is probably obvious but I am curious if we all end up with the same obvious answer.
The fundamental problem behind Omega is how to resolve a claim by a perfect predictor that includes a decision you and you alone are responsible for making. This invokes all sorts of assumptions about choice and free-will, but in terms of phrasing the question these assumptions do not matter. I care about how you will act. What action will you take? However you label the source of these actions is your prerogative. The question doesn’t care how you got there; it cares about the answer.
My answer is that you will give Omega $5. If you wouldn't, Omega wouldn't have made the prediction. If Omega made the prediction AND you don't give $5, then the definition of Omega is flawed and we have to redefine Omega.
A possible objection to the scenario is that the prediction itself is impossible to make. If Omega is a perfect predictor it follows that it would never make an impossible prediction and the prediction “you will give Omega $5” is impossible. This is invalid, however, as long as you can think of at least one scenario where you have a good reason to give Omega $5. Omega would show up in that scenario and ask for $5.
If this scenario includes a long argument about why you should give it $5, so be it. If it means Omega gives you $10 in return, so be it. But it doesn’t matter for the sake of the question. It matters for the answer, but the question doesn’t need to include these details because the underlying problem is still the same. Omega made a prediction and now you need to act. All of the excuses and whining and arguing will eventually end with you handing Omega $5. Omega’s prediction will have included all of this bickering.
This question is essentially the same as saying, “If you have a good reason to give Omega $5 then you will give Omega $5.” It should be a completely uninteresting, obvious question. It holds some implications on other scenarios involving Omega but those are for another time. Those implications should have no bearing on the answer to this question.
Any puzzlement we feel when reading such thought experiments would, I suspect, evaporate if we paid more attention to pragmatics.
The set-up of the scenario (“Suppose that Omega, etc.”) presupposes some things. The question “What do you do?” presupposes other things. Not too surprisingly, these two sets of presuppositions are in conflict.
Specifically, the question “What do you do” presupposes, as parts of its conditions of felicity, that it follows a set-up in which all of the relevant facts have been presented. There is no room left to spring further facts on you later, and we would regard that as cheating. (“You will in fact give $5 to Omega because he has slipped a drug into your drink which causes you to do whatever he suggests you will do!”)
The presuppositions of “What do you do” lead us to assume that we are going about our normal lives, when suddenly some guy appears before us, introduces himself as Omega, says “You will now give me $5”, and looks at us expectantly. Whereupon we nod politely (or maybe say something less polite), and go on our way. From which all we can deduce is that this wasn’t in fact the Omega about which the Tales of Newcomb were written, since he’s just been shown up as an imperfect predictor.
The presuppositions carried by “Omega is a perfect predictor” are of an entirely different order. Logically, whatever predictions Omega makes will in fact turn out to have been correct. But these presuppositions simply don’t match up with those of the “What do you do?” question, in which what determines your behaviour is only the ordinary facts of the world as you know it, plus whatever facts are contained in the scenario that constitutes the set-up of the question.
If Omega is a perfect predictor, all we have is a possible world history, where Omega at some time t appears, makes a prediction, and at some time t’ that prediction has been fulfilled. There is no call to ask a “What do you do” question. The answers are laid out in the specification of the world history.
One-boxing is the correct choice in the original problem, because we are asked to say in which of two world-histories we walk away with $1M, and given the stipulation that there exist no world-histories to choose from in which we walk away with $1M and two boxes. We’re just led astray by the pragmatics of “What do you do?”.
[EDIT: in case it isn’t clear, and because you said you were curious what people thought the obvious answer was, I think the obvious answer is “get lost”; similarly the obvious answer to the original problem is “I take the two boxes”. The obvious answer just happens to be the incorrect choice. I have changed the paragraph just previous to say “the correct choice” instead of “the correct answer”.
Also, in the previous paragraph I assume I want the $1M, and it is that which makes one-boxing the correct choice. Of course it’s presented as a free-will question, that is, one in which more than one possible world-history is available, and so I can’t rule out unlikely worlds in which I want the $1M but mistakenly pick the wrong world-history.]
Recording an oops: when I wrote the above I didn’t really understand Newcomb’s Problem. I retract pretty much all of the above comment.
I’m now partway through Gary Drescher’s Good and Real and glad that it’s given me a better handle on Newcomb, and that I can now classify my mistake (in my above description of the “original problem”) as “evidentialist”.
I think I understand your point. A reiteration in my words:
The question “What do you do?” implies that the answer is not locked in. If a perfect predictor has made a prediction about what I will do, then the question “What do you do?” is nonsensical.
Am I close?
EDIT: No, this was not a correct interpretation of Morendil’s post. See below.
EDIT2: And it has nothing to do with what I think is true.
If you don’t know what the prediction is, it’s not nonsensical. You still have to decide what to do.
If you do know what the prediction is, then the way in which you react to that prediction determines which prediction you’ll hear. For example, if I walk up to someone and say, “I’m good at predicting people in simple problems, I’m truthful, and I predict you’ll give me $5,” they won’t give me anything. Since I know this, I won’t make that prediction. If people did decide to give me $5 in this sort of situation, I might well go around making such predictions.
You seem to be confused about free will. Keep reading the Sequences and you won’t be.
I don’t know how to respond to this or Morendil’s second comment. I feel like I am missing something obvious to everyone else but when I read explanations I feel like they are talking about a completely unrelated topic.
Things like this:
Confuse me because as far as I can tell, this has nothing to do with free will. I don’t care about free will. I care about what happens when a perfect predictor enters the room.
Is such a thing just completely impossible? I wouldn’t have expected the answer to this to be Yes.
Okay, yeah, so restrict yourself only to the situations where people will give you the $5 even though you told them the prediction. This is a good example of my frustration. I feel like your response is completely irrelevant. Experience tells me this is highly unlikely. So what am I missing? Some key component to free will? A bad definition of “perfect predictor”? What?
To me the scenario seems to be as simple as: If Omega predicts X, X will happen. If X wouldn’t have happened, Omega wouldn’t predict X.
I don’t see how including “knowledge of the prediction” into X makes any difference. I don’t see how whatever definition of free will you are using makes any difference.
“Go read the Sequences” is fair enough, but I wouldn’t mind a hint as to what I am supposed to be looking for. “Free will” doesn’t satiate my curiosity. Can you at least tell me why Free Will matters here? Is it something as simple as, “You cannot predict past a free will choice?”
As it is right now, I haven’t learned anything other than, “You’re wrong.”
I sympathize with your frustration at those who point you to references without adequate functional summaries. Unfortunately, I struggle with some of the same problems you’re asking about.
Still, I can point you to the causal map that Eliezer_Yudkowsky believes captures this problem accurately (ETA: That means Newcomb’s problem, though this discussion started off on a different one).
The final diagram in this post shows how he views it. He justifies this causal model by the constraints of the problem, which he states here.
Also, here’s my expanded, modified network to account for a few other things.
ETA: Bolding was irritating, so I’ve decided to separately list what his criteria for a causal map are, given the problem statement. (The implication for the causal graph follows each one in parentheses.)
Must have nodes corresponding to logical uncertainty (Self-explanatory)
Omega’s decision on box B correlates to our decision of which boxes to take (Box decision and Omega decision are d-connected)
Omega’s act lies in the past. (Actions after Omega’s act are uncorrelated with actions before Omega’s act, once you know Omega’s act.)
Omega’s act is not directly influencing us (No causal arrow directly from Omega to us/our choice.)
We have not found any other property which would screen off this uncertainty even when we inspect our own source code / psychology in advance of knowing our actual decision, and that our computation is the only direct ancestor of our logical output. (Seem to be saying the same thing: arrow from computation directly to logical output.)
Our computation is the only direct ancestor of our logical output. (Only arrow pointing to our logical output comes from our computation.)
Ah, okay, thanks. I can start reading those, then.
I think the way you phrased some things in the OP and the fact that you called the post “The Fundamental Problem Behind Omega” has confused a lot of people. Afaict your position is exactly right… but the title suggests a problem. What is that problem?!
“Problem” as in “Puzzle” not “Problem” as in “Broken Piece.”
Would changing the title to Puzzle help?
So the fundamental puzzle of Omega is: what do you do if he tells you he has predicted you will give him $5?
And the answer is, “Whatever you want to do, but you want to give him $5.” I guess I’m missing the significance of all this.
Yes, but it’s also clear that that would be a non-problem. What I mean is, there is no decision to make in such a problem, because, by assumption, the “you” referred to is a “you” that will give $5. There’s no need to think about what you “would” do because that’s already known.
But likewise, in Newcomb’s problem, the same thing is happening: by assumption, there is no decision left to make. At most, I can “decide” right now, so I make a good choice when the problem comes up, but for the problem as stated, my decision has already been made.
(Then again, it sounds like I’m making the error of fatalism there, but I’m not sure.)
The problem I see is that then you (together with Omega’s prediction about you) become something like self-PA.
I thought it was obvious, but people are disagreeing with me, so… I don’t know what that means.
When a human brain makes a decision, certain computations take place within it and produce the result. Those computations can be perfectly simulated by a sufficiently-more-powerful brain, e.g. Omega. Once Omega has perfectly simulated you for the relevant time, he can make perfect predictions concerning you.
Perfectly simulating any computation requires at least as many resources as the computation itself (1), so AFAICT it’s impossible for anything, even Omega, to simulate itself perfectly. So a general “perfect predictor” may be impossible. But in this scenario, Omega doesn’t have to be a general perfect predictor; it only has to be a perfect predictor of you.
From Omega’s perspective, after running the simulation, your actions are determined. But you don’t have access to Omega’s simulation, nor could you understand it even if you did. There’s no way for you to know what the results of the computations in your brain will be, without actually running them.
If I recall the Sequences correctly, something like the previous sentence would be a fair definition of Eliezer’s concept of free will.
(1) ETA: On second thought this need not be the case. For example, f(x) = ( (x *10) / 10 ) +1 is accurately modeled by f(x) = x+1. Presumably Omega is a “well-formed” mind without any such rent-shirking spandrels.
Keep in mind that I might be confused about either free will or Newcomb problems.
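To make the footnote's point concrete, here is a minimal Python sketch (my own illustration, not from the thread). It uses integer arithmetic so the equality is exact, whereas the footnote's version uses real division:

```python
# A cheaper model can predict a costlier computation exactly, so a
# "perfect predictor" need not run a full simulation of the thing it predicts.
def expensive(x):
    return (x * 10) // 10 + 1   # does extra, pointless work

def cheap_model(x):
    return x + 1                # same input-output behaviour, less work

print(all(expensive(x) == cheap_model(x) for x in range(-1000, 1000)))  # True
```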
My first comment above isn’t really intended as an explanation of Newcomb’s original problem, just an explanation of why they elicit feelings of confusion.
My own initial confusion regarding them has (I think) partly evaporated as a result of considering pragmatics, and partly too as a result of reading Julian Barbour’s book on timeless physics on top of the relevant LW sequences.
Okay. That helps, thanks.
Sounds like you might be having confusion resulting from circular mental causal models. You’ve got an arrow from Omega to X. Wrong direction. You want to reason, “If X is likely to happen, Omega will predict X.”
I believe the text you quote is intended to be interpreted as material implication, not causal arrows.
Sure. So, X implies that Omega will predict X. The four possible states of the universe:
Where
X is “You will give Omega $5 if Y happens” and
Y is “Omega appears, tells you it predicted X, and asks you for $5”:
1) X is true; Omega does Y
2) X is false; Omega does Y
3) X is true; Omega does not do Y
4) X is false; Omega does not do Y
Number two will not happen because Omega will not predict X when X is false. Omega doesn’t even appear in options 3 and 4, so they aren’t relevant. The last remaining option is:
X is true; Omega does Y. Filling it out:
X is “You will give Omega $5 if Omega appears, tells you it predicted X, and asks you for $5.”
Hmm… that is interesting. X includes a reference to X, which isn’t a problem in language, but could be a problem with the math. The problem is not as simple as putting “you will give Omega $5” in for X because that isn’t strictly what Omega is asking.
The easiest simplification is to take out the part about Omega telling you it predicted X… but that is such a significant change that I consider it a different puzzle entirely.
Is this your objection?
That is an interesting math problem. And the math problem has a solution, which is called a quine. So the self-referentialness of the prediction is not by itself a sufficient objection to your scenario.
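For readers unfamiliar with quines, here is the standard Python example (my own addition; the thread only names the concept). The point is that self-reference gets resolved by construction rather than by infinite regress, which is why a prediction that mentions itself is not automatically ill-formed.

```python
# A classic quine: a program whose output is exactly its own source code.
s = 's = %r\nprint(s %% s)'
print(s % s)
```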
Nice, thanks.
If by locked in you mean, only a subset of all possible world states are available, then yes, your first sentence is on target.
As to the second, it’s not really a matter of the question making sense. It’s a well-formed English sentence, its meaning is clear, it can be answered, and so on.
It is just that the question will reliably induce answers which are answers to something different from the scenario as posed, in which a) Omega is understood to be a perfect predictor, and b) all the relevant facts are only the ordinary state of the world plus a). In your scenario, the answer I want to give—in fact the answer I would give—is “I tell Omega to get lost.” I would answer as if you’d asked “What do you want to answer?”, or “What outcome would you prefer, if you were free to disregard the logical constraints on the scenario?”
Suppose I ask you to choose a letter string which conforms to the pattern (B|Z)D?. The letter B is worth $1M and the letter D is worth $1K. You are to choose the best possible string. Clearly the possibilities are BD, ZD, B, Z. Now we prefix the strings with one letter, which gives the length of your choice: 2BD, 2ZD, 1B, 1Z.
The original Newcomb scenario boils down to this: conditional on the string not containing both 2 and B (and not containing both 1 and Z), which string choice has the highest expected value? You’re disguising this question, which has an obvious and correct answer of “1B”, as another (“What do you do”).
It doesn’t matter that 2BD has the highest expected value of all. It doesn’t matter that there seems to be a “timing” consideration, in which Omega has “already” chosen the second letter in the string, and you’re “choosing” the number “afterwards”. The information that Omega is a perfect predictor is a logical constraint on the strings that you can pick from, i.e. on the “end states” that you can experience. Your “decision” has to be compatible with one of these end states.
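A minimal sketch of that string formalization in Python (my own rendering of the comment above; the dollar values follow the comment, the code itself is only illustrative):

```python
# Enumerate the candidate strings and apply the Omega constraint:
# no string may contain both '2' and 'B', nor both '1' and 'Z'.
VALUES = {'B': 1000000, 'D': 1000}

def value(s):
    return sum(VALUES.get(c, 0) for c in s)

candidates = ['2BD', '2ZD', '1B', '1Z']
allowed = [s for s in candidates
           if not ('2' in s and 'B' in s)
           and not ('1' in s and 'Z' in s)]

print(allowed)                  # ['2ZD', '1B']
print(max(allowed, key=value))  # '1B' -- the one-boxing choice
```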
Why? I don’t understand why the answers are disconnected from the scenario. Why isn’t all of this included in the concept of a perfect predictor?
So… what if the scenario allows for you to want to give $5? The scenario you are talking about is impossible because Omega wouldn’t have asked you in that scenario. It would have been able to predict your response and would have known better than to ask.
Hmm. Okay, that makes sense.
Are you saying that it doesn’t matter for the question, “Which string choice has the highest expected value?” or the question, “What do you do?” My guess is the latter.
Okay, but I don’t understand how this distinguishes the two questions. If I asked, “What do you do?” what am I asking? Since it’s not “Which string scores best?”
My impression was that asking, “What do you do?” is asking for a decision between all possible end states. Apparently this was a bad impression?
From a standpoint of the psychology of language, when you ask “What do you do”, you’re asking me to envision a plausible scenario—basically to play a movie in my head. If I can visualize myself two-boxing and somehow defying Omega’s prediction, my brain will want to give that answer.
When you ask “What do you do”, you’re talking to the parts of my brain who consider all of 2BD, 2ZD, 1B and 1Z as relevant possibilities (because they have been introduced in the description of the “problem”).
If you formalize first then ask me to pick one of 2ZD or 1B, after pointing out that the other possibilities are eliminated by the Omega constraint, I’m more likely to give the correct answer.
Oh. Okay, yeah, I guess I wasn’t looking for an answer in terms of “What verbal response do you give to my post?” I was looking for an answer strictly in terms of possible scenarios.
Is there a better way to convey that than “What do you do?” Or am I still missing something? Or… ?
If we agree to treat “Omega predicts X” as being equivalent to “X is true”, then “Suppose Omega predicts that you’ll give it $5” means “Suppose that you’ll give Omega $5”. Then, the question
becomes
I have no problems with that. Do you?
If whenever Omega predicts I will give it $5, I don’t give it $5, then I will never observe Omega predicting I will give it $5, which I don’t want to happen. Therefore, I don’t give the $5. If Omega makes the prediction anyways, this is a problem with Omega, not my decision.
If Omega asked you for $5 and promised you $10, would you do it?
Do you mean that Omega explains that it will give me $10 if and only if I give it $5? Then, yes, I would give it $5.
I see where this is going, and you are ignoring the conventional implicit “All else being equal”. Do you agree that Omega declaring its prediction is not what causes me to give it $5, and that making such predictions does not make the subject a money pump?
Yep. This isn’t counterfactual mugging and isn’t intended to be. The point in this post will apply to counterfactual mugging, but the information in this post will not turn the subject into a money pump.
I didn’t ignore it. I made this point in the post:
The expected response to this post is, “Well, yeah.”
I like this article, but agree the title is off. Perhaps “My Fundamental Question about Omega” or even “Omega: I Just Don’t Get It” would be more karma-encouraging. I suspect that at least some people (not me) are taking the current title to mean that you have some sort of new mathematical proof about TDT and then are voting you down in disappointment when they see this. ;-)
[Edit to add, for latecomers: the post I’m replying to was originally titled “The Fundamental Problem Behind Omega”]
Ooh, I like that much better. Thanks for the tip.
The statement also seems to be just like, “If Omega has good reason to predict that you will give it $5, you will give it $5.”
Yes.
Maul the prick with a sock filled with 500 pennies.
(a) is correct. (b) does not apply, in many cases Omega is a benefactor, but can be used in scenarios where Omega causes a net harm. The important point is that Omega is perfectly honest, the rules of the scenario are exactly what Omega says they are.
Omega is not malevolent in that it isn’t out to get you. Not malevolent is different than benevolent.
Sometimes, Omega is malevolent.
For the sake of the point in the article, claiming that Omega is not malevolent cleans up annoying, irrelevant questions. Any application of this point would only apply to non-malevolent Omegas, sure, but I am happy with that. Once we deal with the non-malevolent Omegas we can take care of the malevolent ones.
In other words, I am not trying to strictly define Omega. I am trying to find a stepping stone to solving non-malevolent Omega problems.
The reason I stated it the way I did in the article is because most of the articles using Omega include some such clause. Solving end cases helps solve all cases.
You are missing the point of Omega, which is to factor out considerations of uncertainty. Omega is a perfect predictor so that we can be certain that its predictions are accurate. Omega is perfectly honest, and explains the rules of the scenario, so that we can be certain of the rules.
We don’t have to worry about Omega’s motivations at all, because, in a proper Omega scenario, Omega’s actions in response to every possible state of the scenario are exactly specified.
Right. I used the term “not malevolent” for this. What term would you have used?
“Has exactly specified behavior” would work.
Sure, that works. How about, “(b) has explicitly defined behavior.” Does that translate okay?
This may be too trivial for here, but I just watched a Derren Brown show on Channel 4. I think it’s very likely that he could do a stage show in which he plays the part of Omega and consistently guesses correctly, and if that were to happen, I’d love to know whether those who one-box or two-box when faced with Omega would make the same decision when faced with Derren Brown. I would one-box.
F = Factors that feed into your decision process.
OP = Omega’s prediction.
YD = Your decision.
F --> OP
F --> YD
Your decision does not bootstrap itself out of nothing; it is a function of F. All causality here is forwards in time. By the definition of Omega, OP and YD always match, and the causality chain is self-consistent, for a single timeline. Most confusion that I have seen around Omega or Newcomb seems to be confusion about at least one of these things.
Yeah, I agree with that.
The catch is that Omega isn’t going to show up if it predicts you aren’t going to pay. If it showed up, then it must have predicted you are going to pay.
Oops, as soon as Omega tells you his prediction the above has to change, because now there is a new element in F.
I think this is the same self-referential problem Mr. Hen calls out in this comment.
I think I agree with Sly. If Omega spilling the beans influences your decision, then it is part of F, and therefore Omega must model that. If Omega fails to predict that revealing his prediction will cause you to act contrarily, then he fails at being Omega.
I can’t tell whether this makes Omega logically impossible or not. Anyone?
This doesn’t make Omega logically impossible unless we make him tell his prediction. (In order to be truthful, Omega would only tell a prediction that is stable upon telling, and there may not be one.)
I don’t think it makes Omega logically impossible in all situations; I think it depends on whether F-->YD (or a function based on it that can be recursively applied) has a fixed point or not.
I’ll try and hash it out tomorrow in Haskell. But now it is late. See also the fixed point combinator if you want to play along at home.
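The comment above mentions trying this in Haskell; here is a rough sketch of the same fixed-point idea in Python instead (the decision function and candidate predictions are invented for illustration). The idea is that announcing the prediction becomes part of F, and Omega can only truthfully announce a prediction that stays true once announced.

```python
# A prediction is "stable" (a fixed point) if announcing it does not falsify it.
# Omega only ever announces stable predictions.
def decision(announced_prediction):
    # A made-up subject: pays $5 only if the deal includes $10 in return.
    if announced_prediction == "you will pay $5 and receive $10":
        return "pay $5"
    return "refuse"

def comes_true(prediction, action):
    return ("pay $5" in prediction) == (action == "pay $5")

candidates = ["you will pay $5",
              "you will pay $5 and receive $10",
              "you will refuse"]
stable = [p for p in candidates if comes_true(p, decision(p))]
print(stable)  # the only predictions Omega could truthfully announce to this subject
```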
I would assume that Omega telling you his prediction was already factored into the Omega Prediction F.
I agree with that. I don’t expect a perfect predictor to make that prediction, though, but if it were made, then I’d find myself handing over the $5 for some reason or other.
Yes, you would expect that you would find yourself handing over the money. If told “Omega will soon predict that you will give him $5”, then you divide the universe into two categories—I will give over $5, or I won’t—and assign much greater probability to the first option.
But that is not a reason to give him $5 if you otherwise wouldn’t. It’s a reason to expect that there will be compelling reasons to make you do it—but if these compelling reasons don’t materialise, there is no reason for you to act as if they were there.
Yes, I agree with this.
Actually if Omega literally materialized out of thin air before me, I would be amazed and consider him a very powerful and perhaps supernatural entity, so would probably pay him just to stay on his good side. Depending on how literally we take the “Omega appears” part of this thought experiment, it may not be as absurd as it seems.
Even if Omega just steps out of a taxi or whatever, some people in some circumstances would pay him. The Jim Carrey movie “Yes Man” is supposedly based on a true story of someone who decided to say yes to everything, and had very good results. Omega would only appear to such people.
I had this sitting in my drafts folder and noticed another long discussion about two-boxing versus one-boxing and realized that the next step in the conversation was similar to the point I was trying to make here.
If this post doesn’t get voted up and promoted, then please post “the next step in the conversation” as a comment here rather than its own post.
If the post doesn’t get promoted, then I will not post any further thoughts on Omega, Newcomb’s, or anything else related.
I wouldn’t mind getting the problem resolved, though. Do you agree with the answer given by Morendil?
“The next step in the conversation” was referring to the next step in the aforementioned conversation on two-boxing and has nothing to do with this post.
In the original statement of Newcomb’s Paradox, it was stated that Omega is “almost certainly” correct. When did Omega go from being “almost certainly” correct to an arbiter of absolute truth?
I think that’s more of a simplifying assumption. I’ve seen statements of the puzzle with varying degrees of certainty in Omega’s predictions (total, “almost certain”, 99%, etc.).
I’m pretty sure you could use, instead of Omega, a human psychologist with a 90% track record in predicting two-boxers (and predicting if you’ll use a coinflip just to tick her off). The expected value of two-boxing vs one-boxing then requires a more sophisticated calculation. But I don’t think that changes the structure of the puzzle.
I think it’s a good simplifying assumption, but I wonder how much of the confusion that results in philosophers deciding to one box is not understanding what a perfect predictor is. Are there any defenses of one boxing from people that believe Omega is a perfect predictor?
It changes the structure tremendously. A world in which Omega predicts you will give it $5 and you don’t suddenly has a non-zero probability.
If Omega is perfect, you may as well hand over the $5 right now. If he isn’t, you still know that most likely you will give over the $5, but you might as well wait around to see why. And the decision “I will not hand over $5” is no longer inconsistent.
That feels just like being mugged. I KNOW that eventually I will give Omega $5, but I prefer it not to happen by some unforeseeable process that may cause irreparable damage to me, like epileptic seizure or lightning strike. So I just hand over the cash. By the way, this reasoning applies regardless of Omega’s accuracy level.
Then you’re much more likely to be told this by Omega in the first place, for no better reason than that you were frightened enough to hand over the cash.
What do you mean by the likelihood of Omega saying something? You condition on something different from what I condition on, but I don’t understand what it is. Anyway, what I wrote stands even if we explicitly state that Omega does not say anything except “I am Omega. You will soon give me 5 dollars.”
He conditions on your response. It is like a simplified version of Newcomb’s paradox. You choose a decision theory, then Omega tells you to give him $5 iff your decision theory is such that you will give him $5 upon being told that. If you think the way you talked in the grandparent, then you will pay up.
tut, that’s correct, and I don’t feel bad about your conclusion at all. We have no disagreement, although I think your terminology obscures the fact that “my chosen decision theory” can in fact be a sudden, unforeseen brain hemorrhage during my conversation with Omega. So let me simply ask:
If Omega appeared right now, and said “I am Omega. You will give me 5 dollars in one minute.”, what would you actually do during that minute? (Please don’t answer that this is impossible because of your chosen decision theory. You can’t know your own decision theory.)
Of course you can’t predict any of the strange or not so strange things that could happen to you during the time, all perfectly transparent to Omega. But that’s not what I’m asking. I’m asking about your current plan.
I would try to get Omega to teach me psychology. Or just ask questions.
I would not give him anything if he would not answer.
All right, you are committed. :) At least admit that you would be frightened in the last five seconds of the minute. Does it change anything if Omega tells you in advance that it will not help you with any sort of information or goods?
I can only think about Omega in far mode, so I cannot predict that accurately. But I feel that I would be more curious than anything else.
Good point. That’s a terrifying thought—and may be enough to get me to hand over the cash right away.
I might put the cash in one of twenty black boxes, and hand one of them over to Omega at random.
It shouldn’t feel like being mugged. All that making Omega perfect predictor does is prevent it from bugging you if you are not willing to pay $5. It means Omega will ask less not that you will pay more.
Your analysis is one-sided. Please try to imagine the situation with a one minute time limit. Omega appears, and tells you that you will give it 5 dollars in one minute. You decide that you will not give it the money. You are very determined about this, maybe because you are curious about what will happen. The clock is ticking...
The fewer seconds are left of the minute, the more worried you should objectively be, because eventually you WILL hand over the money, and the fewer seconds are left, the more disruptive the change that will eventually cause you to reconsider will be.
Note that Omega didn’t give any promises about being safe during the one minute. If you think that e.g. causing you brain damage would be unfair of Omega, then we are already in the territory of ethics, not decision theory. Maybe it wasn’t Omega that caused the brain damage, maybe it appeared before you exactly because it predicted that it will happen to you. With Omegas, it is not always possible to disentangle cause and effect.
Whoop, sorry, I deleted the comment before you replied.
Let us assume that you will never, under any circumstances hand over $5 unless you feel good and happy and marvelous about it. Omega can easily pick a circumstance where you feel good, happy, marvelous about handing it $5. In this scenario, by definition, you will not feel mugged.
On the other hand, let us assume that you can be bullied into handing over $5 by Omega appearing and demanding $5 in one minute. If this works, which we are assuming it does, Omega can appear and get its $5. You will feel like you were just mugged, but the only way this can happen is if you are the sort of person that will actually hand over $5 without understanding why. Omega is a “jerk” in the sense that it made you feel like you were being mugged, but this doesn’t imply anything about the scenario or Omega. It implies something about the situations in which you would hand Omega $5. (And that Omega doesn’t care about being a jerk.)
The point is this: If you made a steadfast decision to never hand Omega $5 without feeling happy about it, Omega would never ask you for $5 without making you feel happy about it. If you decide to never, ever hand over $5 while feeling happy about it, then you will never see a non-mugging scenario.
Note: This principle is totally limited to the scenario discussed in the OP. This has no bearing on Newcomb’s or Counterfactual Mugging or anything else.
This is true but it doesn’t change how frequently you would give Omega $5. It changes Omega’s success rate, but only in the sense that it won’t play the game if you aren’t willing to give $5.
If A = You pay Omega $5 and O = Omega asks for $5:
p(A|O) = p(O|A) * p(A) / (p(O|A) * p(A) + p(O|~A) * p(~A))
Making Omega a perfect predictor sets p(Omega asks|You don’t pay) to 0, so p(O|~A) = 0.
p(A|O) = p(O|A) * p(A) / (p(O|A) * p(A) + 0 * p(~A))
p(A|O) = p(O|A) * p(A) / (p(O|A) * p(A))
p(A|O) = 1
Therefore, p(You pay Omega $5|Omega asks for $5) is 1. If Omega asks, you will pay. Big whoop. This is a restriction on Omega asking, not on you giving.
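A quick numerical check of that derivation (the prior and likelihood values are arbitrary, chosen only to illustrate the point):

```python
# With p(O|~A) = 0, Bayes' rule forces p(A|O) = 1 for any prior p(A) > 0
# and any p(O|A) > 0. Relaxing p(O|~A) above zero breaks that.
def p_pay_given_asked(p_A, p_O_given_A, p_O_given_notA):
    numerator = p_O_given_A * p_A
    denominator = p_O_given_A * p_A + p_O_given_notA * (1 - p_A)
    return numerator / denominator

print(p_pay_given_asked(0.1, 0.3, 0.0))   # 1.0  (perfect predictor)
print(p_pay_given_asked(0.1, 0.3, 0.2))   # ~0.14 (imperfect predictor)
```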
Yes, but consider what happens when you start conditioning on the statement B=”I do not intend to give Omega $5″. If Omega is perfect, this is irrelevant; you will hand over the cash.
If Omega is not perfect, then the situation changes. Use A and O as above; then a relevant question is: how many of Omega’s errors have B (nearly all of them) versus how many of Omega’s successes have B (nearly none of them). Basically, you’re trying to estimate the relative sizes of (B&A)|O versus (B&~A)|O.
Now A|O is very large while ~A|O is very small, but (B&A)|O is tiny in A|O while (B&~A)|O makes up most of ~A|O. So I’d crudely estimate that those two sets are generally of pretty comparable size. If Omega is only wrong one in a million, I’d estimate I’d have even odds of handing him the $5 if I didn’t want to.
Right, when Omega is perfect, this isn’t really a useful distinction. The correlation between B and A is irrelevant for the odds of p(A|O). It does get more interesting when asking:
p(A|B)
p(~A|B)
p(O|B)
These are still interesting even when Omega is perfect. If, as you suggest, we look at the relationship between A, B, and O when Omega isn’t perfect, your questions are dead on in terms of what matters.
A mugger will soon come up to me with a gun and make me choose between my life and $5 for his buddy Omega. That’s my prediction.
I need to ask: Is this post wrong? Not, is this post stupid or boring or whatever. Is it wrong?
As best as I can tell, there are a handful of objections to the post itself, but there seems to be mostly agreement in its conclusion.
The two main detractors are such:
Morendil, who seems to be saying that the question, “What do you do?” will “reliably induce answers which are answers to something different from the scenario as posed.” Namely, the answer given to that question will be the same as if I had asked “What do you want to answer?”
Peter_de_Blanc, who claims that the scenario is inconsistent
There is also a general complaint that Omega is not being defined correctly, so I will leave Omega out of it.
So, without regard to how boring or uninteresting this is, is the following correct?
Given a perfect predictor (PP) who possesses the ability to accurately predict the outcome of any scenario:
If A = You pay PP $5 and
S = PP asks for $5
p(A|S) = p(S|A) * p(A) / (p(S|A) * p(A) + p(S|~A) * p(~A))
In addition, I add the constraint that the perfect predictor will never ask you for $5 if it doesn’t predict you will give it $5 when asked. This sets p(PP asks|You don’t pay) to 0, so p(S|~A) = 0.
p(A|S) = p(S|A) * p(A) / (p(S|A) * p(A) + 0 * p(~A))
p(A|S) = p(S|A) * p(A) / (p(S|A) * p(A))
p(A|S) = 1
Therefore, p(You pay PP $5|PP asks for $5) is 1. The probability that you pay PP $5 given that PP just asked you for $5 is 1.
The phrasing in this comment is different than the phrasing in the original post. This is an even more simplified version of the question. Am I right?
I disagree, Omega can have various properties as needed to simplify various thought experiments, but for the purpose of Newcomb-like problems Omega is a very good predictor and may even have a perfect record but is not a perfect predictor in the sense of being perfect in principle or infallible.
If Omega were a perfect predictor then the whole dilemma inherent in Newcomb-like problems ceases to exist and that short circuits the entire point of posing those types of problems.
I voted this comment down, and would like to explain why.
Right, we don’t want people distracted by whether Omega’s prediction could be incorrect in their case or whether the solution should involve tricking Omega, etc. We say that Omega is a perfect predictor not because it’s so very reasonable for him to be a perfect predictor, but so that people won’t get distracted in those directions.
We must disagree about what is the heart of the dilemma. How can it be all about whether Omega is wrong with some fractional probability? Rather it’s about whether logic (2-boxing seems logical) and winning are at odds. Or perhaps whether determinism and choice are at odds, if you are operating outside a deterministic world-view. Or perhaps a third thing, but nothing—in this problem—about what kinds of Omega powers are reasonable or possible. Omega is just a device being used to set up the dilemma.
My difficulty is in understanding why the concept of a perfect predictor is relevant to artificial intelligence.
Also, 2-boxing is indicated by inductive logic based on non-Omega situations. Given the special circumstances of Newcomb’s problem, it would seem unwise to rely on that. Deductive logic leads to 1-boxing.
You don’t need perfect prediction to develop an argument for one-boxing. If the predictor’s probability of correct prediction is p and the utility of the contents of the one-box is k times the utility of the contents of the two-box, then the expected utility of one-boxing is greater than that of two-boxing if p is greater than (k + 1) / (2k).
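A small sketch of that expected-utility comparison (my own, with the payoffs normalized so the always-present box is worth 1 and the predicted box is worth k):

```python
# p = probability the predictor is correct; k = value of the predicted box.
def eu_one_box(p, k):
    return p * k                      # big box is filled iff one-boxing was predicted

def eu_two_box(p, k):
    return p * 1 + (1 - p) * (k + 1)  # big box is empty iff two-boxing was predicted

p, k = 0.6, 10.0
print(eu_one_box(p, k) > eu_two_box(p, k))   # True
print(p > (k + 1) / (2 * k))                 # True: the same comparison, rearranged
```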
I agree that in general this is how it works. It’s rather like POAT that way… some people see it as one kind of problem, and other people see it as another kind of problem, and neither side can make sense of the other’s position.
I’ve heard this sentiment expressed a fair bit, but I think I understand the argument for two-boxing perfectly, even though I’d one-box.
POAT?
Plane on a treadmill. (I’d pull out LMGTFY again, but I try to limit myself to one jerk-move per day.)
Er, did you actually Google it before saying that? For me it’s not even defined that way on the front page.
Yep. For me the first link (at work, second link now at home) is urbandictionary.com, and it’s the second definition.
I don’t think it counts as a matter for LMGTFY unless the answer pretty much screams at you on the results page before you even start clicking the links...
I personally ask for a link if two minutes of Googling and link-clicking gets me nothing; my standard for LMGTFY follows as a corollary.
Making the assumption that the person you’re responding to hasn’t invested those two minutes can be risky, as the present instance shows. Maybe they have, but got different results.
Another risky assumption is that the other person is using the same Google that you are using. By default the search bar in Firefox directs me to the French Google (I’ve even looked for a way to change that, without success).
So you could end up looking like an ass, rather than a jerk, when you pull a LMGTFY and the recipient still doesn’t see what you’re seeing. It only works as a status move if you’re confident that most search options and variations will still pull up the relevant result.
More importantly, this is yet another data point in favor of the 10x norm. Unless of course we want LW to be Yet Another Internet Forum (complete with avatars).
(ETA: yes, in the comment linked here the 10X norm is intended to apply to posts, not comments. I favor the stronger version that applies to comments as well: look at the length of this comment thread, infer the time spent writing these various messages, the time wasted by readers watching Recent Comments, and compare with how long it would have taken to spell it out.)
’Strue. Those occurred to me about five minutes after I first replied to ciphergoth, when the implications of the fact that the link position changed based on where I was when I Googled finally penetrated my cerebral cortex. I considered noting it in an ETA, but I didn’t expect the comment thread to continue as far as it has.
Oh, note also that Cyan’s first use of LMGTFY was I think legit—finding my blog through Google is pretty straightforward from my username.
I don’t think it’s fair to count the meta-discussion against Cyan when weighing this up. Anything can spark meta-discussion here.
If it takes two full minutes for my readership to find out what the terms mean, the onus is on me to link to it; if that only takes me three minutes and it saves two readers Googling, then it’s worth it. The LMGTFY boundary is closer to ten seconds or less.
Another option would have been to spell it out—that way a lot of readers would have known without Googling, and those who didn’t would have got answers right away.
I don’t disagree with this. My “corollary” comment above was too facile—when I recall my own behavior, it’s my standard for peevishly thinking LMGTFY, not actually linking it.
First, thanks for explaining your down vote and thereby giving me an opportunity to respond.
The problem is that it is not a fair simplification; it disrupts the dilemma in such a way as to render it trivial. If you set the accuracy of the prediction to 100%, many of the other specific details of the problem become largely irrelevant. For example you could then put $999,999.99 into box A and it would still be better to one-box.
It’s effectively the same thing as lowering the amount in box A to zero or raising the amount in box B to infinity. And one could break the problem in the other direction by lowering the accuracy of the prediction to 50% or equalizing the amount in both boxes.
It’s because the probability of a correct prediction must be between 50% and 100%, or it breaks the structure of the problem in the sense that it makes the answer trivial to work out.
I suppose it is true that some people have intuitions that persist in leading them astray even when the probability is set to 100%. In that sense it may still have some value if it helps to isolate and illuminate these biases.
My objection here doesn’t have to do with whether it is reasonable for Omega to possess such powers but with the over-simplification of the dilemma to the point where it is trivial.
I see we really are talking about different Newcomb “problem”s. I took back my down vote. So one of our problems should have another name, or at least a qualifier.
I don’t think Newcomb’s problem (mine) is so trivial. And I wouldn’t call belief in the triangle inequality a bias.
The contents of box 1 = (a>=0)
The contents of box 2 = (b>=0)
2-boxing is the logical deduction that ((a+b)>=a) and ((a+b)>=b).
I do 1-box, and do agree that this decision is a logical deduction. I find it odd though that this deduction works by repressing another logical deduction and don’t think I’ve ever seen this before. I would want to argue that any and every logical path should work without contradiction.
Perhaps I can clarify: I specifically intended to simplify the dilemma to the point where it was trivial. There are a few reasons for this, but the primary reason is so I can take the trivial example expressed here, tweak it, and see what happens.
This is not intended to be a solution to any other scenario in which Omega is involved. It is intended to make sure that we all agree that this is correct.
I’m finding “correct” to be a loaded term here. It is correct in the sense that your conclusions follow from your premises, but in my view it bears only a superficial resemblance to Newcomb’s problem. Omega is not defined the way you defined it in Newcomb-like problems and the resulting difference is not trivial.
To really get at the core dilemma of Newcomb’s problem in detail one needs to attempt to work out the equilibrium accuracy (that is the level of accuracy required to make one-boxing and two-boxing have equal expected utility) not just arbitrarily set the accuracy to the upper limit where it is easy to work out that one-boxing wins.
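For concreteness, here is that equilibrium-accuracy calculation, assuming the classic $1,000,000 / $1,000 payoffs (the amounts are not fixed anywhere in this thread, so treat them as an example):

```python
# Equilibrium accuracy: the p at which one-boxing and two-boxing
# have equal expected utility, i.e. p*big == small + (1 - p)*big.
big, small = 1000000, 1000
p_equilibrium = (big + small) / (2 * big)
print(p_equilibrium)   # 0.5005
```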
I don’t care about Newcomb’s problem. This post doesn’t care about Newcomb’s problem. The next step in this line of questioning still doesn’t care about Newcomb’s problem.
So, please, forget about Newcomb’s problem. At some point, way down the line, Newcomb’s problem may show up again, but when it does this:
Will certainly be taken into account. Namely, it is exactly because the difference is not trivial that I went looking for a trivial example.
The reason you find “correct” to be loaded is probably because you are expecting some hidden “Gotcha!” to pop out. There is no gotcha. I am not trying to trick you. I just want an answer to what I thought was a simple question.
I agree. A perfect predictor is either Laplace’s Demon or a supernatural being. I don’t see why either concept is particularly useful for a rationalist.
I would recommend skipping ahead in the sequences to http://wiki.lesswrong.com/wiki/Free_will_(solution)
The wiki tells me I should try to solve the problem on my own. I assume that this is a serious request, so I will not read through that sequence yet.
Then you can read the set-up post, at least. http://lesswrong.com/lw/of/dissolving_the_question/
I would TRY not to give Omega $5, just to see what would happen.
Omega obviously knows this about you.
Are you postulating that Omega never lies? You didn’t mention this in your post, but without it your problem is trivial.
If Omega never lies, and if Omega makes all predictions by running perfect simulations, then the scenario you gave is inconsistent. For Omega to predict that you will give it $5 after being told that you will give it $5, it must run a simulation of you in which it tells you that it has predicted that you will give it $5. But since it runs this simulation before making the prediction, Omega is lying in the simulation.
No, I assumed not malevolent would cover that, but I guess it really doesn’t. I added a clause to explicitly point out that Omega isn’t lying.
I don’t understand this. Breaking it down:
Omega predicts I will give it $5
Omega appears and tells me it predicted I will give it $5
Telling me about the prediction implies that the telling was part of the original prediction
If the telling was part of the original prediction, then it was part of a simulation of future events
The simulation involves Omega telling me but...
This is where I lose the path. But what? I don’t understand where the lie is. If I translate this to real life:
I predict Sally will give me $5
I walk up to Sally and tell her I predict she will give me $5
I then explain that she owes me $5 and she already told me she would give me the $5 today
Sally gives me $5 and calls me weird
Where did I lie?
Omega predicts I will give it $5
Omega appears and tells me it predicted I will give it $5
Omega tells me why I will give it $5
I give Omega $5
I don’t see how including the prediction in the prediction is a lie. It is completely trivial for me, a completely flawed predictor, to include a prediction in its own prediction.
Essentially:
No he isn’t, because the simulation is assuming that the statement will be made in the future. Thinking, “Tomorrow, I will say it is Thursday,” does not make me a liar today. You can even say, “Tomorrow, I will say it is today,” and not be lying because “today” is relative to the “tomorrow” in the thought.
Omega saying, “I predict you will act as such when I tell you I have predicted you will act as such,” has no lie.
The simulated Omega says, “I have predicted blah blah blah,” when Omega has made no such prediction yet. That’s a lie.
Omega doesn’t have to simulate people. It just has to know. For example, I know that if Omega says to you “Please accept a million dollars” you’ll take it. I didn’t have to simulate you or Omega to know that.
No it isn’t because the simulated Omega will be saying that after the prediction was made.
When the simulated Omega says “I” it is referring to the Omega that made the prediction.
If Omega runs a simulation for tomorrow that includes it saying, “Today is Thursday,” the Omega in the simulation is not lying.
If Omega runs a simulation that includes it saying, “I say GROK. I have said GROK,” the simulation is not lying, even if Omega has not yet said GROK. The “I” in “I have said” is referring to the Omega of the future. The one that just said GROK.
If Omega runs a simulation that includes it doing X and then saying, “I have done X.” there is no lie.
If Omega runs a simulation that includes it predicting an event and then saying, “I have predicted this event,” there is no lie.
Does the simulated Omega run its own simulation in order to make its prediction? And does that simulation run its own simulation too?
Either way, I don’t see a lie.
If Omega runs a simulation in some cases (say, due to insufficiency of lesser predictive techniques), and in some of those cases the simulated individual tells Omega to buzz off, has Omega lied to those simulated individuals? (I phrase this as a question because I haven’t been closely following your reasoning, so I’m not arguing for or against anything you’ve written so far—it’s a genuine inquiry, not rhetoric.)
Omega has to make a prediction of your behaviour, so it has to simulate you, not itself. Your decision process is simulated inside Omega’s processor, with the input “Omega tells you that it predicts X”. There is no need for Omega to simulate its own decision process, since it is completely irrelevant to this scenario.
As an analogy, I can “simulate” the physics of boiling water to predict that if I put my hand in, the water will cool down a few degrees, even if I know that I will not put my hand in. I don’t have to simulate a copy of myself which actually puts the hand in, and so you can’t use my prediction to falsify the statement “I never harm myself”.
Of course, if Omega simulates itself, it may run in all sorts of self-referential problems, but that isn’t the point of Omega, and has nothing to do with “Omega never lies”.
I used the phrase “simulated individual”; it was MrHen who was talking about Omega simulating itself, not me. Shouldn’t this reply descend from that comment?
Probably it should, but I was unable (too lazy) to trace the moment where the idea of Omega simulating himself first appeared. Thanks for the correction.
This isn’t strictly true.
But I agree with the rest of your point.
It’s true by hypothesis in my original question. It’s possible we’re talking about an empty case—perhaps humans just aren’t that complicated.
Yep. I am just trying to make the distinction clear.
Your question relates to prediction via simulation.
My original point makes no assumption about how Omega predicts.
In the above linked comment, EY noted that simulation wasn’t strictly required for prediction.
We are in violent agreement.
Very clever. The statement “Omega never lies.” is apparently much less innocent than it seems. But I don’t think there is such a problem with the statement “Omega will not lie to you during the experiment.”
I would say no.
Why would you say such a weird thing?
What do you mean?
I’m sorry. :) I mean that it is perfectly obvious to me that in Cyan’s thought experiment Omega is indeed telling a falsehood to the simulated individuals. How would you argue otherwise?
Of course, the simulated individual has an information disadvantage: she does not know that she is inside a simulation. This permits Omega many ugly lawyery tricks. (“Ha-ha, this is not a five dollar bill, this is a SIMULATED five dollar bill. By the way, you are also simulated, and now I will shut you down, cheapskate.”)
Let me note that I completely agree with the original post, and Cyan’s very interesting question does not invalidate your argument at all. It only means that the source of Omega’s stated infallibility is not simulate-and-postselect.
I didn’t see Cyan’s question as offering any particular position so I didn’t feel obligated to give a reason more thorough than what I wrote elsewhere in the thread.
Omega isn’t assigned the status of Liar until it actually does something. I can imagine myself lying all the time but this doesn’t mean that I have lied. When Omega simulates itself, it can simulate invalid scenarios and then check them off the list of possible outcomes. Since Omega will avoid all scenarios where it will lie, it won’t actually lie. This doesn’t mean that it cannot simulate what would happen if it did lie.
Simulating somebody is doing something, especially from the point of view of the simulated. (Note that in Cyan’s thought experiment she has a consciousness and all.)
We postulated that Omega never lies. The simulated consciousness hears a lie. Now, as far as I can see, you have two major ways out of the contradiction. The first is that it is not Omega that does this lying, but simulated-Omega. The second is that lying to a simulated consciousness does not count as lying, at least not in the real world.
The first is perfectly viable, but it highlights what for me was the main take-home message from Cyan’s thought experiment: That “Omega never lies.” is harder to formalize than it appears.
The second is also perfectly viable, but it will be extremely unpopular here at LW.
Perhaps I am not fully understanding what you mean by simulation. If I create a simulation, what does this mean?
In this context, something along the lines of whole brain emulation.
The simulated prediction doesn’t need to be accurate. Omega just doesn’t make the prediction to the real you if it is proven inaccurate for the simulated you.
In this sort of scenario, the prediction is not interesting, because it does not affect anything. The subject would give the $5 whether the prediction was made or not.
It doesn’t matter if the prediction is interesting. The prediction is accurate.
This comment is directly addressing the statement:
By “the prediction is not interesting”, I mean that it does not say anything about predictions, or general scenarios involving Omega. It does not illustrate any problem with Omega.
Okay. To address this point I need to know what, specifically, you were referring to when you said, “this sort of scenario.”
I mean, when Omega has some method, independent of declaring predictions about it, of convincing the subject to give it $5, so it appears, declares the prediction, and then proceeds to use the other method.
Omega isn’t using mind-control. Omega just knows what is going to happen. Using the prediction itself as an argument to give you $5 is a complication on the question that I happen to be addressing.
In other words, it doesn’t matter why you give Omega $5.
I said this in the original post:
All of the Omega scenarios are more complicated than the one I am talking about. That, exactly, is why I am talking about this one.
In the other Omega scenarios, the predictions are an integral part of the scenario. Remove the prediction and the whole thing falls apart.
In your scenario, the prediction doesn’t matter. Remove the prediction, and everything else is exactly the same.
It is therefore absurd that you think your scenario says something about the others just because they all involve predictions.
The specific prediction isn’t important here, but the definition of Omega as a perfect predictor sure is important. This is exactly what I wanted to do: Ignore the details of the prediction and talk about Omega.
Removing the prediction entirely would cause the scenario to fall apart because then we could replace Omega with anything. Omega needs to be here and it needs to be making some prediction. The prediction itself is a causal fact only in the sense that Omega wouldn’t appear before you if it didn’t expect to get $5.
It’s a tautology, and that is my point. The only time Omega would ever appear is if its request would be granted.
In my opinion, it is more accurate to say that the reason behind your action is completely irrelevant. It doesn’t matter that it isn’t the prediction itself causing you to give Omega $5.
It isn’t really absurd. Placing restrictions on the scenario will cause things to go crazy and it is this craziness that I want to look at.
People still argue about one-boxing. The most obvious, direct application of this post is to show why one-boxing is the correct answer. Newcomb’s problem is actually why I ended up writing this. Every time I started working on the math behind Newcomb’s I would bump into the claim presented in this post and realize that people were going to object.
So, instead of talking about this claim inside of a post on Newcomb’s, I isolated it and presented it on its own. And people still objected to it, so I am glad I did this.
I don’t think Omega is a perfect predictor or benevolent. (Edit: or neutral/‘not malevolent’. He may well be malevolent, but a million dollars is a million dollars. :-)
Omega doesn’t lie and is very powerful and smart. Sometimes he predicts wrongly. He only says something will happen if he is certain of his prediction. If he is at all uncertain, he will only say that he predicted it. (He may also say he predicted it when he is certain, as that is still true.)
“Perfect predictor” leads us somewhat astray. “Bloody good predictor” would be enough (same reason to avoid probabilites 1 and 0, except as a shorthand).
Then if Omega shows up and predicts you will give it $5, and you don’t feel like it, then don’t. Omega made a mistake—which is possible, as he’s only nearly perfect.
Could Omega microwave a burrito so hot, that he himself could not eat it?
and my personal favorite: http://www.smbc-comics.com/index.php?db=comics&id=1778#comic
There is no “Omega” so why are you wasting time on this question?
In the future, the FAI we build may well encounter the “F”AI of another civilization. When it does, if FAI determines that “F”AI can predict FAI’s decisions (regardless of vice versa), we want FAI to make the right decisions.
For what it’s worth, my knowledge of physics tells me the following:
If it’s possible to transmit information back in time, physical laws probably nevertheless have at least one consistent solution, i.e. paradoxes are impossible. This comes from something I read about how this is the case in a billiard-ball computer (researchers built a NOT gate and put it across a wormhole; the result was that an invalid logic value came out of the wormhole and the NOT gate left this invalid value unaffected), and the fact that quantum computers use only unitary transformations, which… tend to have fixed points, at least.
It is probably not possible to transmit information back in time.
Omega’s predictions are based on applied determinism (observing the current state and calculating the future), not time travel.
Isn’t the possibility of perfectly predicting the future pretty much the same as the possibility of transmitting things back in time? Or maybe we’re not predicting the future at all, in which case… hmm.
No. The arbitrary ability to transmit things back in time can be used to set up paradoxes. Predictions, on the other hand, can be inaccurate, describe counterfactual futures, or be poorly specified, but they do not result in paradoxes.