If Omega is omniscient, the two actions (one-boxing and two-boxing) each have a certain outcome with probability 1, so you just pick the better outcome. If Omega is just a skilled predictor, there is no certain outcome, so you two-box.
You are facing a modified version of Newcomb’s Problem, which is identical to standard Newcomb except that Omega now has 99% predictive accuracy instead of ~100%. Do you one-box or two-box?
You get $1000 with 99% probability and $1001000 with 1% probability, for a final expected value of $11000. A one-boxer gets $1000000 with 99% probability and $0 with 1% probability, with a final expected value of $990000. Even with probabilistic uncertainties, you would still have been comparatively better off one-boxing. And this isn’t just limited to high probabilities; theoretically, any predictive power better than chance causes Newcomb-like situations.
In practice, this tends to go away with lower predictive accuracies because the relative rewards aren’t high enough to justify one-boxing. Nevertheless, I have little to no trouble believing that a skilled human predictor can reach accuracies of >80%, in which case these Newcomb-like tendencies are indeed present.
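For concreteness, here is a minimal Python sketch of the expected-value comparison above, treating "p accuracy" simply as "Omega's prediction matches the actual choice with probability p" (the payoffs are the standard $1000 / $1000000; the sampled accuracies are just illustrative):

```python
# Expected values in the modified Newcomb's Problem, as a function of the
# probability p that Omega's prediction matches the player's actual choice.
def expected_values(p):
    ev_one_box = p * 1_000_000                     # Box B is full iff Omega predicted one-boxing
    ev_two_box = (1 - p) * 1_001_000 + p * 1_000   # Box B is usually empty for a two-boxer
    return ev_one_box, ev_two_box

for p in (0.50, 0.51, 0.80, 0.99):
    one, two = expected_values(p)
    print(f"accuracy {p:.2f}: one-box ${one:,.0f}, two-box ${two:,.0f}")

# At p = 0.99: one-box $990,000 vs two-box $11,000.
# The break-even accuracy is 1_001_000 / 2_000_000 = 0.5005, so with these
# payoffs anything noticeably better than chance already favours one-boxing.
```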
You get $1000 with 99% probability and $1001000 with 1% probability, for a final expected value of $11000. A one-boxer gets $1000000 with 99% probability and $0 with 1% probability, with a final expected value of $990000.
No, I don’t think so.
Let’s do things in temporal order.
Step 1: Omega makes a prediction and puts money into boxes.
Assuming you are a two-boxer, there is a 99% chance that there is nothing in Box B (and $1000 in Box A, as always), along with a 1% chance that Box B contains $1000000. If we’re going with the most likely scenario, there is nothing in Box B.
In the classic Newcomb’s Problem Omega moves first, before I can do anything. Step 1 happens before I make any choices.
If Omega is a good predictor, he’ll predict my decision, but there is nothing I can do about it. I don’t make a choice to be a “two-boxer” or a “one-boxer”.
I can make a choice only after step 1, once the boxes are set up and unchangeable. And after step 1 everything is fixed so you should two-box.
In the classic Newcomb’s Problem Omega moves first, before I can do anything. Step 1 happens before I make any choices.
This is true both for the 99% and 100% accurate predictor, isn’t it? Yet you say you one-box with the 100% one.
I can make a choice only after step 1, once the boxes are set up and unchangeable. And after step 1 everything is fixed so you should two-box.
Please answer me this:
What does 99% accuracy mean to you exactly, in this scenario? If you know that Omega can predict you with 99% accuracy, what reality does this correspond to for you? What do you expect to happen different, compared to if he could predict you with, say, 50% accuracy (purely chance guesses)?
Actually, let’s make it more specific: suppose you do this same problem 1000 times, with a 99% Omega, what amount of money do you expect to end up with if you two-box? And what if you one-box?
The reason I am asking is that it appears to me like, the moment Omega stops being perfectly 100% accurate, you really stop believing he can predict you at all. It’s like, if you’re given a Newcomblike problem that involves “Omega can predict you with 99% accuracy”, you don’t actually accept this information (and are therefore solving a different problem).
It’s unsafe to guess at another’s thoughts, and I could be wrong. But I simply fail to see, based on the things you’ve said, how the “99% accuracy” information informs your model of the situation at all.
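As a concrete reading of the 1000-round question above, here is a rough Monte Carlo sketch in Python; it simply takes the 99% figure at face value (prediction matches the actual choice 99% of the time) and uses the usual payoffs:

```python
import random

# Play `rounds` games against an Omega whose prediction matches the player's
# actual choice with probability `accuracy` (taken at face value, per the question).
def simulate(choice, accuracy=0.99, rounds=1000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(rounds):
        correct = rng.random() < accuracy
        predicted = choice if correct else ("two" if choice == "one" else "one")
        box_b = 1_000_000 if predicted == "one" else 0
        total += box_b if choice == "one" else box_b + 1_000
    return total

print(simulate("one"))  # roughly 1000 * $990,000
print(simulate("two"))  # roughly 1000 * $11,000
```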
This is true both for the 99% and 100% accurate predictor, isn’t it? Yet you say you one-box with the 100% one.
Yes, because 100% is achievable only through magic. Omniscience makes Omega a god and you can’t trick an omniscient god.
That’s why there is a discontinuity between P = 1 and P = 1 − ε: we leave the normal world and enter the realm of magic.
What does 99% accuracy mean to you exactly, in this scenario? If you know that Omega can predict you with 99% accuracy, what reality does this correspond to for you?
In the frequentist framework this means that if you were to fork the universe and make 100 exact copies of it, in 99 copies Omega would be correct and in one of them he would be wrong.
In the Bayesian framework probabilities are degrees of belief and the local convention is to think of them as betting odds, so this means I should be indifferent which side to take of a 1 to 99 bet on the correctness of Omega’s decision.
suppose you do this same problem 1000 times, with a 99% Omega, what amount of money do you expect to end up with if you two-box? And what if you one-box?
The question is badly phrased because it ignores the temporal order and so causality.
If you become omniscient for a moment and pick 1000 people who are guaranteed to two-box and 1000 people who are guaranteed to one-box, the one-box people will, of course, get more money from a 99% Omega. But it’s not a matter of their choice, you picked them this way.
the moment Omega stops being perfectly 100% accurate, you really stop believing he can predict you at all
Not at all. I mentioned this before and I’ll repeat it again: there is no link between Omega’s prediction and the choice of a standard participant in Newcomb’s Problem. The standard participant does not have any advance information about Omega and his boxes and so cannot pre-commit to anything. He only gets to do something after the boxes become immutable.
At the core, I think, the issue is of causality and I’m not comfortable with the acausal manoeuvres that LW is so fond of.
I asked what it means to you. Not sure why I got an explanation of bayesian vs frequentist probability.
there is no link between Omega’s prediction and the choice of a standard participant in Newcomb’s Problem. The standard participant does not have any advance information about Omega and his boxes and so cannot pre-commit to anything.
You seem to believe precommitment is the only thing that makes your choice knowable to Omega in advance. But Omega got his track record of 99% accurate predictions somehow. Whatever algorithms are ultimately responsible for your choice, they—or rather their causal ancestors—exist in the world observed by Omega at the time he’s filling his boxes. Unless you believe in some kind of acausal choicemaking, you are just as “committed” if you’d never heard of Newcomb’s problem. However, from within the algorithms, you may not know what choice you’re bound to make until you’re done computing. Just as a deterministic chess playing program is still choosing a move, even if the choice in a given position is bound to be, say, Nf4-e6.
Indeed, your willingness (or lack thereof) to believe that, whatever the output of your thinking, Omega is 99% likely to have predicted it, is probably going to be a factor in Omega’s original decision.
To me personally? Pretty much nothing, an abstract exercise with numbers. As I said before (though the post was heavily downvoted and probably invisible by now), I don’t expect to meet Omega and his boxes in my future, so I don’t care much, certainly not enough to pre-commit.
Or are you asking what 1% probability means to me? I suspect I have a pretty conventional perception of it.
You seem to believe precommitment is the only thing that makes your choice knowable to Omega in advance.
No, that’s not the issue. We are repeating the whole line of argument I went through with dxu and TheOtherDave during the last couple of days—see e.g. this and browse up and down this subthread. Keep in mind that some of my posts there were downvoted into invisibility so you may need to click on buttons to open parts of the subthread.
Sigh. I wasn’t asking if you care. I meant more something like this:
But NASA told Mr. Ullian that the probability of failure was more like 1 in 10^5. [...] “That means you could fly the shuttle every day for an average of 300 years between accidents — every day, one flight, for 300 years — which is obviously crazy!”
Feynman doesn’t believe the number, but this is what it means to him: if he were to take the number seriously, this is the reality he thinks it would correspond to. That’s what I meant when I asked “what does this number mean to you”. What reality the “99% accuracy” (hypothetically) translates to for you when you consider the problem. What work it’s doing in your model of it, never mind if it’s a toy model.
Suppose you—or if you prefer, any not-precommitted participant—faces Omega, who presents his already-filled boxes, and the participant chooses to either one-box or two-box. Does the 99% accuracy mean you expect to afterwards find that Omega predicted that choice in 99 out of 100 cases on average? If so, can you draw up expected values for either choice? If not, how else do you understand that number?
the whole line of argument I went through with dxu and TheOtherDave during the last couple of days
OK, I re-read it and I think I see it.
the optimal choice for me after Stage 1 is to two-box
I think the issue lies in this “after” word. If determinism, then you don’t get to first have a knowable-to-Omega disposition to either one-box or two-box, and then magically make an independent choice after Omega fills the boxes. The choice was already unavoidably part of the Universe before Stage 1, in the form of its causal ancestors, which are evidence for Omega to pick up to make his 99% accurate prediction. (So the choice affected Omega just fine, which is why I am not very fond of the word “acausal”). The standard intuition that places decisionmaking in some sort of a causal void removed from the rest of the Universe doesn’t work too well when predictability is involved.
Yep, that’s another way to look at the causality issue. I asked upthread if the correctness of the one-boxing “solution” implies lack of free will and, in fact, depends on the lack of free will. I did not get a satisfying answer (instead I got condescending nonsense about my corrupt variation of an ideal CDT agent).
If “the choice was already unavoidably part of the Universe before Stage 1” then it is not a “choice” as I understand the word. In this case the whole problem disappears since if the choice to one-box or two-box is predetermined, what are we talking about, anyway?
As is often the case, Christianity already had to deal with this philosophical issue of a predetermined choice—see e.g. Calvinism.
Still wouldn’t mind getting a proper answer to my question...
And well, yeah, if you believe in a nondeterministic, acausal free will, then we may have an unbridgeable disagreement. But even then… suppose we put the issue of determinism and free will completely aside for now. Blackbox it.
Imagine—put on your “take it seriously” glasses for a moment if I can have your indulgence—that a sufficiently advanced alien actually comes to Earth and in many, many trials establishes a 99% track record of predicting people’s n-boxing choices (to keep it simple, it’s 99% for one-boxers and also 99% for two-boxers).
Imagine also that, for whatever reason, you didn’t precommit (maybe sufficiently reliable precommitment mechanisms are costly and inconvenient, inflation ate into the value of the prize, and the chance of being chosen by Omega for participation is tiny. Or just akrasia, I don’t care). And then you get chosen for participation and accept (hey, free money).
What now? Do you have a 99% expectation that, after your choice, Omega will have predicted it correctly? Does that let you calculate expected values? If so, what are they? If not, in what way are you different from the historical participants who amounted to the 99% track record Omega’s built so far (= participants already known to have found themselves predicted 99% of the time)?
Or are you saying that an Omega like that can’t exist in the first place? In which case how is that different—other than in degree—from whenever humans predict other humans with better than chance accuracy?
But let me ask that question again, then. Does the correctness of one-boxing require determinism, aka lack of free will?
Does that let you calculate expected values?
Let’s get a bit more precise here. There are two ways you can use this term. One is with respect to the future, to express the probability of something that hasn’t happened yet. The other is with respect to lack of knowledge, to express that something already happened, but you just don’t know what it is.
The meanings conveyed by these two ways are very different. In particular, when looking at Omega’s two boxes, there is no “expected value” in the first sense. Whatever happened already happened. The true state of nature is that one distribution of money between the boxes has probability 1 (it happened) and the other distribution has probability 0 (it did not happen). I don’t know which one of them happened, so people talk about expected values in the sense of uncertainty of their beliefs, but that’s quite a different thing.
So after Stage 1 in reality there are no expected values of the content of the boxes—the boxes are already set and immutable. It’s only my knowledge that’s lacking. And in this particular setting it so happens that I can make my knowledge not matter at all—by taking both boxes.
Your approach also seems to have the following problem. Essentially, Omega views all people as divided into two classes: one-boxers and two-boxers. If belonging to such a class is unchangeable (see predestination), the problem disappears since you can do nothing about it. However if you can change which class you belong to (e.g. before the game starts), you can change it after Stage 1 as well. So the optimal solution looks to be to get yourself into the one-boxing class before the game, but then, once Stage 1 happens, switch to the two-boxing class. And if you can’t pull off this trick, well, why do you think you can change classes at all?
Does the correctness of one-boxing require determinism, aka lack of free will?
I don’t think so, which is the gist of my last post—I think all it requires is taking Omega’s track record seriously. I suppose this means I prefer EDT to CDT—it seems insane to me to ignore evidence, past performance showing that 99% of everyone who’s two-boxed so far got out with much less money.
Essentially, Omega views all people as divided into two classes: one-boxers and two-boxers.
No more than a typical coin is either a header or a tailer. Omega can simply predict with high accuracy if it’s gonna be heads or tails on the next, specific occasion… or if it’s gonna be one or two boxes, already accounting for any tricks. Imagine you have a tell, like in poker, at least when facing someone as observant as Omega.
All right, I’m done here. Trying to get a direct answer to my question stopped feeling worthwhile.
Does the correctness of one-boxing require determinism, aka lack of free will?
The fact that you think these things are the same thing is the problem. Determinism does not imply lack of “choice”, not in any sense that matters.
To be absolutely clear:
No, one-boxing does not require lack of free will.
But it should also be obvious that for omega to predict you requires you to be predictable. Determinism provides this for the 100% accurate case. This is not any kind of contradiction.
If belonging to such class is unchangeable (see predestination), the problem disappears since you can do nothing about it.
No “changing” is required. You can’t “change” the future any more than you can “change” the past. You simply determine it. Whichever choice you decide to make is the choice you were always going to make, and determines the class you are, and always were in.
Just so I’m clear: when you call that a lack of choice, do you mean to distinguish it from anything? That is, is there anything in the world you would call the presence of choice? Does the word “choice,” for you, have a real referent?
Does the word “choice,” for you, have a real referent?
Sure. I walk into a ice cream parlour, which flavour am I going to choose? Can you predict? Can anyone predict with complete certainty? If not, I’ll make a choice.
This definition of choice is empty. If I can’t predict which flavour you will buy based on knowing what flavours you like or what you want, you aren’t choosing in any meaningful sense at all. You’re just arbitrarily, capriciously, picking a flavour at random. Your “choice” doesn’t even contribute to your own benefit.
If it’s delicious, then any observer who knows what you consider delicious could have predicted what you chose. (Unless there are a few flavours that you deem exactly equally delicious, in which case it makes no difference, and you are choosing at random between them.)
in which case it makes no difference, and you are choosing at random between them
Oh, no, it does make a difference for my flavour preferences are not stable and depend on a variety of things like my mood, the season, the last food I ate, etc. etc.
Whichever choice you decide to make is the choice you were always going to make, and determines the class you are, and always were in.
Yes, I understand that.
So then just decide to one-box. You aren’t something outside of physics; you are part of physics and your decision is as much a part of physics as anything else. Your decision to one-box or two-box is determined by physics, true, but that’s not an excuse for not choosing! That’s like saying, “The future is already set in stone; if I get hit by a car in the street, that’s what was always going to happen. Therefore I’m going to stop looking both ways when I cross the street. After all, if I get hit, that’s what physics said was going to happen, right?”
Choosing is deliberation, deliberation is choosing. Just consider the alternatives (one-box, two-box) and do the one that results in you having more money.
Whichever choice you decide to make is the choice you were always going to make
The keyword here is decide. Just because you were always going to make that choice doesn’t mean you didn’t decide. You weighed up the costs and benefits of each option, didn’t you?
It really isn’t hard. Just think about it, then take one box.
Choosing is deliberation, deliberation is choosing. Just consider the alternatives (one-box, two-box) and do the one that results in you having more money.
Clearly that’s two-boxing. Omega already made his choice, so if he thought I’d two-box, I’ll get:
- One box: nothing
- Two box: the small reward
If Omega thought I’d one-box:
- One box: big reward
- Two box: big reward + small reward
Two-boxing results in more money no matter what Omega thought I’d choose.
What if I try to predict what Omega does, and do the opposite?
That would mean that either 1) there are some strategies I am incapable of executing, or 2) Omega can’t in principle predict what I do, since it is indirectly predicting itself.
Alternatively, what if instead of me trying to predict Omega, we run this with transparent boxes and I base my decision on what I see in the boxes, doing the opposite of what Omega predicted? Again, Omega is indirectly predicting itself.
I don’t see how this is relevant, but yes, in principle it’s impossible to predict the universe perfectly, on account of the universe plus your brain being bigger than your brain. Although, if you live in a bubble universe that is bigger than the rest of the universe, whose interaction with the rest of the universe is limited precisely to your chosen manipulation of the connecting bridge; basically, if you are AIXI, then you may be able to perfectly predict the universe conditional on your actions.
This has pretty much no impact on actual Newcomb’s though, since we can just define such problems away by making Omega do the obvious thing to prevent such shenanigans (“trolls get no money”). For the purpose of the thought experiment, action-conditional predictions are fine.
IOW, this is not a problem with Newcomb’s. By the way, this has been discussed previously.
You’ve now destroyed the usefulness of Newcomb as a potentially interesting analogy to the real world. In real world games, my opponent is trying to infer my strategy and I’m trying to infer theirs.
If Newcomb is only about a weird world where Omega can try and predict the player’s actions, but the player is not allowed to predict Omega’s, then it’s sort of a silly problem. It’s lost most of its generality because you’ve explicitly disallowed the majority of strategies.
If you allow the player to pursue his own strategy, then it’s still a silly problem, because the question ends up being inconsistent (because if Omega plays Omega, nothing can happen).
In real world games, we spend most of our time trying to make action-conditional predictions. “If I play Foo, then my opponent will play Bar”. There’s no attempting to circularly predict yourself with unconditional predictions. The sensible formulation of Newcomb’s matches that.
(For example, transparent boxes: Omega predicts “if I fill both boxes, then player will ___” and fills the boxes based on that prediction. Or a few other variations on that.)
In many (probably most?) games we consider the opponent’s strategy, not simply their next move. Making moves in an attempt to confuse your opponent’s estimation of your own strategy is a common tactic in many games.
Your “modified Newcomb” doesn’t allow the chooser to have a strategy: they aren’t allowed to say “if I predict Omega did X, I’ll do Y.” It’s a weird sort of game where my opponent takes my strategy into account, but something keeps me from considering my opponent’s.
Can’t Omega follow the strategy of ‘Trolls get no money,’ which by assumption is worse for you? I feel like this would result in some false positives, but perhaps not—and the scenario says nothing about the people who don’t get to play in any case.
No, because that’s fighting the hypothetical. Assume that he doesn’t do that.
It is actually approximately the opposite of fighting the hypothetical. It is managing the people who are trying to fight the hypothetical. Precise wording of the details of the specification can be used to preempt such replies, but for casual definitions that assume good faith, sometimes explicit clauses for the distracting edge cases need to be added.
It is fighting the hypothetical because you are not the only one providing hypotheticals. I am too; I’m providing a hypothetical where the player’s strategy makes this the least convenient possible world for people who claim that having such an Omega is a self-consistent concept. Saying “no, you can’t use that strategy” is fighting the hypothetical.
Moreover, the strategy “pick the opposite of what I predict Omega does” is a member of a class of strategies that have the same problem; it’s just an example of such a strategy that is particularly clear-cut, and the fact that it is clear-cut and blatantly demonstrates the problem with the scenario is the very aspect that leads you to call it trolling Omega. “You can’t troll Omega” becomes equivalent to “you can’t pick a strategy that makes the flaw in the scenario too obvious”.
If your goal is to show that Omega is “impossible” or “inconsistent”, then having Omega adopt the strategy “leave both boxes empty for people who try to predict me / do any other funny stuff” is a perfectly legitimate counterargument. It shows that Omega is in fact consistent if he adopts such strategy. You have no right to just ignore that counterargument.
Indeed, Omega requires a strategy for when he finds that you are too hard to predict. The only reason such a strategy is not provided beforehand in the default problem description is because we are not (in the context of developing decision theory) talking about situations where you are powerful enough to predict Omega, so such a specification would be redundant. The assumption, for the purpose of illuminating problems with classical decision theory, is that Omega has vastly more computational resources than you do, so that the difficult decision tree that presents the problem will obtain.
By the way, it is extremely normal for there to be strategies you are “incapable of executing”. For example, I am currently unable to execute the strategy “predict what you will say next, and counter it first”, because I can’t predict you. Computation is a resource like any other.
If your goal is to show that Omega is “impossible” or “inconsistent”, then having Omega adopt the strategy “leave both boxes empty for people who try to predict me / do any other funny stuff” is a perfectly legitimate counterargument.
If you are suggesting that Omega read my mind and think “does this human intend to outsmart me, Omega”, then sure he can do that. But that only takes care of the specific version of the strategy where the player has conscious intent.
If you’re suggesting “Omega figures out whether my strategy is functionally equivalent to trying to outsmart me”, you’re basically claiming that Omega can solve the halting problem by analyzing the situation to determine if it’s an instance of the halting problem, and outputting an appropriate answer if that is the case. That doesn’t work.
Indeed, Omega requires a strategy for when he finds that you are too hard to predict.
That still requires that he determine that I am too hard to predict, which either means solving the halting problem or running on a timer. Running on a timer is a legitimate answer, except again it means that there are some strategies I cannot execute.
The assumption, for the purpose of illuminating problems with classical decision theory, is that Omega has vastly more computational resources than you do, so that the difficult decision tree that presents the problem will obtain.
I thought the assumption is that I am a perfect reasoner and can execute any strategy.
except again it means that there are some strategies I cannot execute.
I don’t see how omega running his simulation on a timer makes any difference for this, but either way this is normal and expected. Problem resolved.
I thought the assumption is that I am a perfect reasoner and can execute any strategy.
Not at all. Though it may be convenient to postulate arbitrarily large computing power (as long as Omega’s power is increased to match) so that we can consider brute force algorithms instead of having to also worry about how to make it efficient.
(Actually, if you look at the decision tree for Newcomb’s, the intended options for your strategy are clearly supposed to be “unconditionally one-box” and “unconditionally two-box”, with potentially a mixed strategy allowed. Which is why you are provided with no information whatsoever that would allow you to predict Omega. And indeed the decision tree explicitly states that your state of knowledge is identical whether Omega fills or doesn’t fill the box.)
I don’t see how omega running his simulation on a timer makes any difference for this,
It’s me who has to run on a timer. If I am only permitted to execute 1000 instructions to decide what my answer is, I may not be able to simulate Omega.
Though it may be convenient to postulate arbitrarily large computing power
Yes, I am assuming that I am capable of executing arbitrarily many instructions when computing my strategy.
the intended options for your strategy are clearly supposed to be “unconditionally one-box” and “unconditionally two-box”, with potentially a mixed strategy allowed. Which is why you are provided with no information whatsoever that would allow you to predict Omega
I know what problem Omega is trying to solve. If I am a perfect reasoner, and I know that Omega is, I should be able to predict Omega without actually having knowledge of Omega’s internals.
Actually, if you look at the decision tree for Newcomb’s, the intended options for your strategy are clearly supposed to be “unconditionally one-box” and “unconditionally two-box”,
Deciding which branch of the decision tree to pick is something I do using a process that has, as a step, simulating Omega. It is tempting to say “it doesn’t matter what process you use to choose a branch of the decision tree, each branch has a value that can be compared independently of why you chose the branch”, but that’s not correct. In the original problem, if I just compare the branches without considering Omega’s predictions, I should always two-box. If I consider Omega’s predictions, that cuts off some branches in a way which changes the relative ranking of the choices. If I consider my predictions of Omega’s predictions, that cuts off more branches, in a way which prevents the choices from even having a ranking.
Yes, I am assuming that I am capable of executing arbitrarily many instructions when computing my strategy.
But apparently you want to ignore the part when I said Omega has to have his own computing power increased to match. The fact that Omega is vastly more intelligent and computationally powerful than you is a fundamental premise of the problem. This is what stops you from magically “predicting him”.
Look, in Newcomb’s problem you are not supposed to be a “perfect reasoner” with infinite computing time or whatever. You are just a human. Omega is the superintelligence. So, any argument you make that is premised on being a perfect reasoner is automatically irrelevant and inapplicable. Do you have a point that is not based on this misunderstanding of the thought experiment? What is your point, even?
But apparently you want to ignore the part when I said Omega has to have his own computing power increased to match.
It’s already arbitrarily large. You want that expanded to match arbitrarily large?
Look, in Newcomb’s problem you are not supposed to be a “perfect reasoner”
Asking “which box should you pick” implies that you can follow a chain of reasoning which outputs an answer about which box to pick.
It sounds like your decision making strategy fails to produce a useful result.
My decision making strategy is “figure out what Omega did and do the opposite”. It only fails to produce a useful result if Omega fails to produce a useful result (perhaps by trying to predict me and not halting). And Omega goes first, so we never get to the point where I try my decision strategy and don’t halt.
(And if you’re going to respond with “then Omega knows in advance that your decision strategy doesn’t halt”, how’s he going to know that?)
Furthermore, there’s always the transparent boxes situation. Instead of explicitly simulating Omega, I implicitly simulate Omega by looking in the transparent boxes and determining what Omega’s choice was.
What is your point, even?
That Omega cannot be a perfect predictor because being one no matter what strategy the human uses would imply being able to solve the halting problem.
It’s already arbitrarily large. You want that expanded to match arbitrarily large?
When I say “arbitrarily large” I do not mean infinite. You have some fixed computing power, X (which you can interpret as “memory size” or “number of computations you can do before the sun explodes the next day” or whatever). The premise of Newcomb’s is that Omega has some fixed computing power Q * X, where Q is really, really extremely large. You can increase X as much as you like, as long as Omega is still Q times smarter.
Asking “which box should you pick” implies that you can follow a chain of reasoning which outputs an answer about which box to pick.
Which does not even remotely imply being a perfect reasoner. An ordinary human is capable of doing this just fine.
My decision making strategy is “figure out what Omega did and do the opposite”. It only fails to produce a useful result if Omega fails to produce a useful result (perhaps by trying to predict me and not halting).
Two points: first, if Omega’s memory is Q times larger than yours, you can’t fit a simulation of him in your head, so predicting by simulation is not going to work. Second, if Omega has Q times as much computing time as you, you can try to predict him (by any method) for X steps, at which point the sun explodes. Naturally, Omega simulates you for X steps, notices that you didn’t give a result before the sun explodes, so leaves both boxes empty and flies away to safety.
That Omega cannot be a perfect predictor because being one no matter what strategy the human uses would imply being able to solve the halting problem.
Only under the artificial irrelevant-to-the-thought-experiment conditions that require him to care whether you’ll one-box or two-box after standing in front of the boxes for millions of years thinking about it. Whether or not the sun explodes, or Omega himself imposes a time limit, a realistic Omega only simulates for X steps, then stops. No halting-problem-solving involved.
In other words, if “Omega isn’t a perfect predictor” means that he can’t simulate a physical system for an infinite number of steps in finite time, then I agree but don’t give a shit. Such a thing is entirely unnecessary. In the thought experiment, if you are a human, you die of aging after less than 100 years. And any strategy that involves you thinking in front of the boxes until you die of aging (or starvation, for that matter) is clearly flawed anyway.
Furthermore, there’s always the transparent boxes situation. Instead of explicitly simulating Omega, I implicitly simulate Omega by looking in the transparent boxes and determining what Omega’s choice was.
This example is less stupid since it is not based on trying to circularly predict yourself. But in this case Omega just makes action-conditional predictions and fills the boxes however he likes.
If I consider my predictions of Omega’s predictions, that cuts off more branches, in a way which prevents the choices from even having a ranking.
It sounds like your decision making strategy fails to produce a useful result. That is unfortunate for anyone who happens to attempt to employ it. You might consider changing it to something that works.
“Ha! What if I don’t choose One box OR Two boxes! I can choose No Boxes out of indecision instead!” isn’t a particularly useful objection.
No, Nshepperd is right. Omega imposing computation limits on itself solves the problem (such as it is). You can waste as much time as you like. Omega is gone and so doesn’t care whether you pick any boxes before the end of time. This is a standard solution for considering cooperation between bounded rational agents with shared source code.
When attempting to achieve mutual cooperation (essentially what Newcomblike problems are all about) making yourself difficult to analyse only helps against terribly naive intelligences. ie. It’s a solved problem and essentially useless for all serious decision theory discussion about cooperation problems.
If your goal is to show that Omega is “impossible” or “inconsistent”, then having Omega adopt the strategy “leave both boxes empty for people who try to predict me / do any other funny stuff” is a perfectly legitimate counterargument. It shows that Omega is in fact consistent if he adopts such strategy. You have no right to just ignore that counterargument.
This contradicts the accuracy stated at the beginning. Omega can’t leave both boxes empty for people who try to adopt a mixed strategy AND also maintain his 99.whatever accuracy on one-boxers.
And even if Omega has way more computational power than I do, I can still generate a random number. I can flip a coin that’s 60/40 one-box, two-box. The most accurate Omega can be, then, is to assume I one-box.
This contradicts the accuracy stated at the beginning. Omega can’t leave both boxes empty for people who try to adopt a mixed strategy AND also maintain his 99.whatever accuracy on one-boxers.
He can maintain his 99% accuracy on deterministic one-boxers, which is all that matters for the hypothetical.
Alternatively, if we want to explicitly include mixed strategies as an available option, the general answer is that Omega fills the box with probability = the probability that your mixed strategy one-boxes.
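A minimal sketch of that rule, assuming Omega fills Box B with probability equal to the probability q that the player's mixed strategy one-boxes (independently of the actual coin flip):

```python
# Expected payoff for a mixed strategy that one-boxes with probability q,
# against an Omega that fills Box B with probability q (the rule described above).
def expected_value(q):
    ev_if_one_box = q * 1_000_000            # keep only Box B
    ev_if_two_box = q * 1_000_000 + 1_000    # Box B (if filled) plus Box A
    return q * ev_if_one_box + (1 - q) * ev_if_two_box

for q in (0.0, 0.4, 0.6, 1.0):
    print(q, expected_value(q))

# Simplifies to 1_000_000 * q + 1_000 * (1 - q): the 60/40 coin mentioned
# above gets about $600,400 in expectation, while always one-boxing (q = 1)
# gets the full $1,000,000.
```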
All of this is very true, and I agree with it wholeheartedly. However, I think Jiro’s second scenario is more interesting, because then predicting Omega is not needed; you can see what Omega’s prediction was just by looking in (the now transparent) Box B.
As I argued in this comment, however, the scenario as it currently is is not well-specified; we need some idea of what sort of rule Omega is using to fill the boxes based on his prediction. I have not yet come up with a rule that would allow Omega to be consistent in such a scenario, though, and I’m not sure if consistency in this situation would even be possible for Omega. Any comments?
As I argued in this comment, however, the scenario as it currently is is not well-specified; we need some idea of what sort of rule Omega is using to fill the boxes based on his prediction.
Previous discussions of Transparent Newcomb’s problem have been well specified. I seem to recall doing so in footnotes so as to avoid distraction.
I have not yet come up with a rule that would allow Omega to be consistent in such a scenario, though, and I’m not sure if consistency in this situation would even be possible for Omega. Any comments?
The problem (such as it is) is that there is ambiguity between the possible coherent specifications, not a complete lack. As your comment points out there are (merely) two possible situations for the player to be in and Omega is able to counter-factually predict the response to either of them, with said responses limited to a boolean. That’s not a lot of permutations. You could specify all 4 exhaustively if you are lazy.
IF (Two box when empty AND One box when full) THEN X IF …
Any difficulty here is in choosing the set of rewards that most usefully illustrate the interesting aspects of the problem.
Any difficulty here is in choosing the set of rewards that most usefully illustrate the interesting aspects of the problem.
I’d say that about hits the nail on the head. The permutations certainly are exhaustively specifiable. The problem is that I’m not sure how to specify some of the branches. Here are all four possibilities (written in pseudo-code following your example):
1. IF (Two box when empty And Two box when full) THEN X
2. IF (One box when empty And One box when full) THEN X
3. IF (Two box when empty And One box when full) THEN X
4. IF (One box when empty And Two box when full) THEN X
The rewards for 1 and 2 seem obvious; I’m having trouble, however, imagining what the rewards for 3 and 4 should be. The original Newcomb’s Problem had a simple point to demonstrate, namely that logical connections should be respected along with causal connections. This point was made simple by the fact that there are two choices, but only one situation. When discussing transparent Newcomb, though, it’s hard to see how this point maps to the latter two situations in a useful and/or interesting way.
When discussing transparent Newcomb, though, it’s hard to see how this point maps to the latter two situations in a useful and/or interesting way.
Option 3 is of the most interest to me when discussing the Transparent variant. Many otherwise adamant One Boxers will advocate (what is in effect) 3 when first encountering the question. Since I advocate strategy 2 there is a more interesting theoretical disagreement. ie. From my perspective I get to argue with (literally) less-wrong wrong people, with a correspondingly higher chance that I’m the one who is confused.
The difference between 2 and 3 becomes more obviously relevant when noise is introduced (eg. 99% accuracy Omega). I choose to take literally nothing in some situations. Some think that is crazy...
In the simplest formulation the payoff for three is undetermined. But not undetermined in the sense that Omega’s proposal is made incoherent. Arbitrary as in Omega can do whatever the heck it wants and still construct a coherent narrative. I’d personally call that an obviously worse decision but for simplicity prefer to define 3 as a defect (Big Box Empty outcome).
As for 4… A payoff of both boxes empty (or both boxes full but contaminated with anthrax spores) seems fitting. But simply leaving the large box empty is sufficient for decision theoretic purposes.
Out of interest, and because your other comments on the subject seem well informed, what do you choose when you encounter Transparent Newcomb and find the big box empty?
what do you choose when you encounter Transparent Newcomb and find the big box empty?
This is a question that I find confusing due to conflicting intuitions. Fortunately, since I endorse reflective consistency, I can replace that question with the following one, which is equivalent in my decision framework, and which I find significantly less confusing:
“What would you want to precommit to doing, if you encountered transparent Newcomb and found the big box (a.k.a. Box B) empty?”
My answer to this question would be dependent upon Omega’s rule for rewarding players. If Omega only fills Box B if the player employs the strategy outlined in 2, then I would want to precommit to unconditional one-boxing—and since I would want to precommit to doing so, I would in fact do so. If Omega is willing to reward the player by filling Box B even if the player employs the strategy outlined in 3, then I would see nothing wrong with two-boxing, since I would have wanted to precommit to that strategy in advance. Personally, I find the former scenario (the one where Omega only rewards people who employ strategy 2) to be more in line with the original Newcomb’s Problem, for some intuitive reason that I can’t quite articulate.
What’s interesting, though, is that some people two-box even upon hearing that Omega only rewards the strategy outlined in 2 (upon hearing, in other words, that they are in the first scenario described in the above paragraph). I would imagine that their reasoning process goes something like this: “Omega has left Box B empty. Therefore he has predicted that I’m going to two-box. It is extremely unlikely a priori that Omega is wrong in his predictions, and besides, I stand to gain nothing from one-boxing now. Therefore, I should two-box, both because it nets me more money and because Omega predicted that I would do so.”
I disagree with this line of reasoning, however, because it is very similar to the line of reasoning that leads to self-fulfilling prophecies. As a rule, I don’t do things just because somebody said I would do them, even if that somebody has a reputation for being extremely accurate, because then that becomes the only reason it happened in the first place. As with most situations involving acausal reasoning, however, I can only place so much confidence in me being correct, as opposed to me being so confused I don’t even realize I’m wrong.
It would seem to me that Omega’s actions would be as follows:
1. IF (Two box when empty And Two box when full) THEN Empty
2. IF (One box when empty And One box when full) THEN Full
3. IF (Two box when empty And One box when full) THEN Empty or Full
4. IF (One box when empty And Two box when full) THEN Refuse to present boxes
Cases 1 and 2 are straightforward. Case 3 works for the problem, no matter which set of boxes Omega chooses to leave.
In order for Omega to maintain its high prediction accuracy, though, it is necessary, if Omega predicts that a given player will choose option 4, that Omega simply refuse to present the transparent boxes to this player. Or, at least, that the number of players who follow the other three options should vastly outnumber the fourth-option players.
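For what it’s worth, here is one way to encode that policy as a sketch, treating the player’s strategy as their counterfactual response to each of the two things they might see (the function and labels are just illustrative):

```python
# Omega's policy for transparent Newcomb, following the four cases above.
# `when_empty` / `when_full` are the player's responses ("one" or "two")
# to seeing Box B empty or full.
def omega_policy(when_empty, when_full):
    if when_empty == "two" and when_full == "two":
        return "leave Box B empty"
    if when_empty == "one" and when_full == "one":
        return "fill Box B"
    if when_empty == "two" and when_full == "one":
        return "empty or full (either is consistent)"
    return "refuse to present the boxes"  # one-box when empty, two-box when full

print(omega_policy("one", "one"))  # fill Box B
print(omega_policy("one", "two"))  # refuse to present the boxes
```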
This is an interesting response, because 4 is basically what Jiro was advocating earlier in the thread, and you’re suggesting that Omega wouldn’t even present the opportunity to people who would try to do that. Would you agree with this interpretation of your comment?
If we take the assumption, for the moment, that the people who would take option 4 form at least 10% of the population in general (this may be a little low), and we further take the idea that Omega has a track record of success in 99% or more of previous trials (as is often specified in Newcomb-like problems), then it is clear that whatever algorithm Omega is using to decide who to present the boxes to is biased, and biased heavily, against offering the boxes to such a person.
Consider:
P(P) = The probability that Omega will present the boxes to a given person.
P(M|P) = The probability that Omega will fill the boxes correctly (empty for a two-boxer, full for a one-boxer)
P(M’|P) = The probability that Omega will fail to fill the boxes correctly
P(O) = The probability that the person will choose option 4
P(M’|O) = 1 (from the definition of option 4)
therefore P(M|O) = 0
and if Omega is a perfect predictor, then P(M|O’) = 1 as well.
P(M|P) = 0.99 (from the statement of the problem)
P(O) = 0.1 (assumed)
Now, of all the people to whom boxes are presented, Omega is getting at most one percent wrong: P(M’|P) ≤ 0.01. Since P(M’|O) = 1 and P(M’|O’) = 0, we have P(M’|P) = P(O|P), so it follows that P(O|P) ≤ 0.01.
If Omega is a less than perfect predictor, then P(M’|O’) > 0, and P(O|P) < 0.01.
And since P(O|P) ≤ 0.01 while P(O) = 0.1, option-4 players make up at most a tenth as large a share of the people Omega presents boxes to as they do of the population at large; equivalently, by Bayes’ theorem, P(P|O) ≤ 0.1 × P(P). I therefore conclude that Omega must have a bias, and a fairly strong one, against presenting the boxes to such perverse players.
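A quick numeric check of that bound (the 10% figure is the assumption made above; everything else follows from the stated 99% accuracy):

```python
# P(O): assumed share of "option 4" (perverse) players in the population.
p_perverse = 0.10
# P(M'|P): Omega's maximum error rate among people it actually presents boxes to.
max_error = 0.01

# If Omega presented boxes to perverse players at the base rate, its error rate
# among presented players would be at least 10%, contradicting the 99% accuracy:
error_if_unbiased = p_perverse * 1.0 + (1 - p_perverse) * 0.0
print(error_if_unbiased)  # 0.10 > 0.01

# So P(O|P) <= 0.01, and by Bayes' theorem the relative chance of a perverse
# player being offered the boxes is P(P|O) / P(P) = P(O|P) / P(O) <= 0.1.
relative_chance = max_error / p_perverse
print(relative_chance)  # 0.1: at least ten times less likely than average
```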
I am too; I’m providing a hypothetical where the player’s strategy makes this the least convenient possible world for people who claim that having such an Omega is a self-consistent concept.
It may be the least convenient possible world. More specifically, it is the minor inconvenience of being careful to specify the problem correctly so as not to be distracted. Nshepperd gives some of the reasoning typically used in such cases.
Moreover, the strategy “pick the opposite of what I predict Omega does” is a member of a class of strategies that have the same problem
What happens when you try to pick the opposite of what you predict Omega does is something like what happens when you try to beat Deep Fritz 14 at chess while outrunning a sports car. You just fail. Your brain is a few pounds of fat approximately optimised for out-competing other primates for mating opportunities. Omega is a super-intelligence. The assumption that Omega is smarter than the player isn’t an unreasonable one and is fundamental to the problem. Defying it is a particularly futile attempt to fight the hypothetical by basically ignoring it.
Generalising your proposed class to executing maximally inconvenient behaviours in response to, for example, the transparent Newcomb’s problem is where it actually gets (tangentially) interesting. In that case you can be inconvenient without out-predicting the superintelligence, and so the transparent Newcomb’s problem requires more care with the if clause.
In the first scenario, I doubt you would be able to predict Omega with sufficient accuracy to be able to do what you’re suggesting. Transparent boxes, though, are interesting. The problem is, the original Newcomb’s Problem had a single situation with two possible choices involved; transparent Newcomb, however, involves two situations:
Transparent Box B contains $1000000.
Transparent Box B contains nothing.
It’s unclear from this what Omega is even trying to predict; is he predicting your response to the first situation? The second one? Both? Is he following the rule: “If the player two-boxes in either situation, fill Box B with nothing”? Is he following the rule: “If the player one-boxes in either situation, fill Box B with $1000000″? The problem isn’t well-specified; you’ll have to give a better description of the situation before a response can be given.
In the first scenario, I doubt you would be able to predict Omega with sufficient accuracy to be able to do what you’re suggesting.
That falls under 1) there are some strategies I am incapable of executing.
transparent Newcomb, however, involves two situations:
Transparent Box B contains $1000000.
Transparent Box B contains nothing.
The transparent scenario is just a restatement of the opaque scenario with transparent boxes instead of “I predict what Omega does”. If you think the transparent scenario involves two situations, then the opaque scenario involves two situations as well. (1 = opaque box B contains $1000000 and I predict that Omega put in $1000000, and 2 = opaque box B contains nothing and I predict that Omega put in nothing.) If you object that we have no reason to think both of those opaque situations are possible, I can make a similar objection to the transparent situations.
What I’ve done for Newcomb problems is that I’ve precommitted to one boxing, but then I’ve paid a friend to follow me at all times. Just before I chose the boxes, he is to perform complicated neurosurgery to turn me into a two boxer. That way I maximize my gain.
Better wipe my memory of getting my friend to follow me then.
Also, I have built a second Omega, and given it to others. They are instructed to two-box if Omega 2 predicts that Omega 1 thinks they’ll one-box, and vice versa.
Um, not my decision again. It was predetermined whether I would look both ways or not.
So the next time you cross the street, are you going to look both ways or not? You can’t calculate the physical consequences of every particle interaction taking place in your brain, so taking the route the universe takes, i.e. just letting everything play out at the lowest level, is not an option for you and your limited processing power. And yet, for some reason, I suspect you’ll probably answer that you will look both ways, despite being unable to actually predict your brain-state at the time of crossing the street. So if you can’t actually predict your decisions perfectly as dictated by physics… how do you know that you’ll actually look both ways next time you cross the street?
The answer is simple: you don’t know for certain. But you know that, all things being equal, you prefer not getting hit by a car to getting hit by a car. And looking both ways helps to lower the probability of getting hit by a car. Therefore, given knowledge of your preferences and your decision algorithm, you will choose to look both ways.
Note that nowhere in the above explanation was determinism violated! Every step of the physics plays out as it should… and yet we observe that your choice still exists here! Determinism explains free will rather than explaining it away; just because everything is determined doesn’t mean your choice doesn’t exist! You still have to choose; if I ask you if you were forced to reply to my comment earlier by the Absolute Power of Determinism, or if you chose to write that comment of your own accord, I suspect you’ll answer the latter.
Likewise, Omega may have predicted your decision, but that decision still falls to you to make. Just because Omega predicted what you would do doesn’t mean you can get away with not choosing, or choosing sub-optimally. If I said, “I predict that tomorrow Lumifer will jump off a cliff,” would you do it? Of course not. Conversely, if I said, “I predict that tomorrow Lumifer will not jump off a cliff,” would you do it? Still of course not. Your choice exists regardless of whether there’s some agent out there predicting what you do.
Therefore, given knowledge of your preferences and your decision algorithm, you will choose to look both ways.
Well, actually, it depends. Descending from flights of imagination down to earth, I sometimes look and sometimes don’t. How do I know there isn’t a car coming? In some cases hearing is enough. It depends.
nowhere in the above explanation was determinism violated!
You are mistaken. If my actions are predetermined, I chose nothing. You may prefer to use the word “choice” within determinism, I prefer not to.
just because everything is determined doesn’t mean your choice doesn’t exist
Yes, it does mean that. And, I’m afraid, just you asserting something—even with force—doesn’t make it automatically true.
Your choice exists regardless of whether there’s some agent out there predicting what you do.
Of course, but that’s not what we are talking about. We are talking about whether choice exists at all.
just because everything is determined doesn’t mean your choice doesn’t exist
Yes, it does mean that.
Okay, it seems like we’re just arguing definitions now. Taboo “choice” and any synonyms. Now that we have done that, I’m going to specify what I mean when I use the word “choice”: the deterministic output of your decision algorithm over your preferences given a certain situation. If there is something in this definition that you feel does not capture the essence of “choice” as it relates to Newcomb’s Problem, please point out exactly where you think this occurs, as well as why it is relevant in the context of Newcomb’s Problem. In the meantime, I’m going to proceed with this definition.
So, in the above quote of mine, replacing “choice” with my definition gives you:
just because everything is determined doesn’t mean the deterministic output of your decision algorithm over your preferences given a certain situation doesn’t exist
We see that the above quote is trivially true, and I assert that “the deterministic output of your decision algorithm over your preferences given a certain situation” is what matters in Newcomb’s Problem. If you have any disagreements, again, I would ask that you outline exactly what those disagreements are, as opposed to providing qualitative objections that sound pithy but don’t really move the discussion forward. Thank you in advance for your time and understanding.
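To make that definition concrete, here is a toy sketch: a completely deterministic decision algorithm over preferences that nevertheless weighs the alternatives and outputs one of them (the numbers are just the expected values from upthread):

```python
# A deterministic "choice": evaluate each option against the preferences and
# return the one with the highest value. Nothing here is random, yet the
# deliberation still happens (compare the chess-engine example earlier in the thread).
def choose(options, utility):
    return max(options, key=utility)

preferences = {"one-box": 990_000, "two-box": 11_000}
print(choose(preferences, preferences.get))  # "one-box"
```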
I’m going to specify what I mean when I use the word “choice”: the deterministic output of your decision algorithm over your preferences given a certain situation.
Sure, you can define the word “choice” that way. The problem is, I don’t have that. I do not have a decision algorithm over my preferences that produces some deterministic output given a certain situation. Such a thing does not exist.
You may define some agent for whom your definition of “choice” would be valid. But that’s not me, and not any human I’m familiar with.
The problem is, I don’t have that. I do not have a decision algorithm over my preferences that produces some deterministic output given a certain situation. Such a thing does not exist.
What is your basis for arguing that it does not exist?
You may define some agent for whom your definition of “choice” would be valid. But that’s not me, and not any human I’m familiar with.
What makes humans so special as to be exempted from this?
Keep in mind that my goal here is not to perpetuate disagreement or to scold you for being stupid; it’s to resolve whatever differences in reasoning are causing our disagreement. Thus far, your comments have been annoyingly evasive and don’t really help me understand your position better, which has caused me to update toward you not actually having a coherent position on this. Presumably, you think you do have a coherent position, in which case I’d be much gratified if you’d just lay out everything that leads up to your position in one fell swoop rather than forcing myself and others to ask questions repeatedly in hope of clarification. Thank you.
I think it became clear that this debate is pointless the moment proving determinism became a prerequisite for getting anywhere.
I did try a different approach, but that was mostly dodged. I suspect Lumifer wants determinism to be a prerequisite; the freedom to do that slippery debate dance of theirs is so much greater then.
What is your basis for arguing that it does not exist?
Introspection.
What’s your basis for arguing that it does exist?
What makes humans so special as to be exempted from this?
Tsk, tsk. Such naked privileging of an assertion.
to resolve whatever differences in reasoning are causing our disagreement.
Well, the differences are pretty clear. In simple terms, I think humans have free will and you think they don’t. It’s quite an old debate, at least a couple of millennia old and maybe more.
I am not quite sure why you have difficulties accepting that some people think free will exists. It’s not that unusual a position to hold.
No offense, but this is a textbook example of an answer that sounds pithy but tells me, in a word, nothing. What exactly am I supposed to get out of this? How am I supposed to argue against this? This is a one-word answer that acts as a blackbox, preventing anyone from actually getting anything worthwhile out of it—just like “emergence”. I have asked you several times now to lay out exactly what your disagreement is. Unless you and I have wildly varying definitions of the word “exactly”, you have repeatedly failed to do so. You have displayed no desire to actually elucidate your position to the point where it would actually be arguable. I would characterize your replies to my requests so far as a near-perfect example of logical rudeness. My probability estimate of you actually wanting to go somewhere with this conversation is getting lower and lower...
Tsk, tsk. Such naked privileging of an assertion.
This is a thinly veiled expression of contempt that again asserts nothing. The flippancy this sort of remark exhibits suggests to me that you are more interested in winning than in truth-seeking. If you think I am characterizing your attitude uncharitably, please feel free to correct me on this point.
In simple terms, I think humans have free will and you think they don’t.
Taboo “free will” and try to rephrase your argument without ever using that phrase or any synonymous terms/phrases. (An exception would be if you were trying to refer directly to the phrase, in which case you would put it in quotation marks, e.g. “free will”.) Now then, what were you saying?
You are supposed to get out of this that you’re asking me to prove a negative and I don’t see a way to do this other than say “I’ve looked and found nothing” (aka introspection). How do you expect me to prove that I do NOT have a deterministic algorithm running my mind?
How am I supposed to argue against this?
You are not supposed to argue against this. You are supposed to say “Aha, so this is a point where we disagree and there doesn’t appear to be a way to prove it one way or another”.
you have repeatedly failed to do so.
From my point of view you repeatedly refused to understand what I’ve been saying. You spent all your time telling me, but not listening.
This is a thinly veiled expression of contempt that again asserts nothing.
Oh, it does. It asserts that you are treating determinism as a natural and default answer and the burden is upon me to prove it wrong. I disagree.
Taboo “free will” and try to rephrase your argument without ever using that phrase or any synonymous terms/phrases.
Why? This is the core of my position. If you think I’m confused by words, tell me how I am confused. Is the problem that you don’t understand me? I doubt it.
Are you talking about libertarian free will? The uncaused causer? I would have hoped that LWers wouldn’t believe such absurd things. Perhaps this isn’t the right place for you if you still reject reductionism.
If you’re just going to provide every banal objection that you can without evidence or explanation in order to block discussion from moving forward, you might as well just stop posting.
I can make a choice only after step 1, once the boxes are set up and unchangeable.
It’s common to believe that we have the power to “change” the future but not the past. Popular conceptions of time travel such as Back To The Future show future events wavering in and out of existence as people deliberate about important decisions, to the extent of having a polaroid from the future literally change before our eyes.
All of this, of course, is nonsense in deterministic physics. If any part of the universe is “already” determined, it all is (and by the way quantum “uncertainty” doesn’t change this picture in any interesting way). So there is not much difference between controlling the past and controlling the future, except that we don’t normally get an opportunity to control the past, due to the usual causal structure of the universe.
In other words, the boxes are “already set up and unchangeable” even if you decide before being scanned by Omega. But you still get to decide whether they are unchangeable in a favourable or unfavourable way.
I am aware of the LW (well, EY’s, I guess) position on free will. But here we are discussing Newcomb’s Problem. We can leave free will for another time. Still, what about my question?
If Omega is a good predictor, he’ll predict my decision, but there is nothing I can do about it. I don’t make a choice to be a “two-boxer” or a “one-boxer”.
Well, if that’s true—that is, if whether you are the sort of person who one-boxes or two-boxes in Newcomblike problems is a fixed property of Lumifer that you can’t influence in any way—then you’re right that there’s no point to thinking about which choice is best with various different predictors. After all, you can’t make a choice about it, so what difference does it make which choice would be better if you could?
Similarly, in most cases, given a choice between accelerating to the ground at 1G and doing so at .01 G once I’ve fallen off a cliff, I would do better to choose the latter… but once I fall off the cliff, I don’t actually have a choice, so that doesn’t matter at all.
Many people who consider it useful to think about Newcomblike problems, by contrast, believe that there is something they can do about it… that they do indeed make a choice to be a “two-boxer” or a “one-boxer.”
whether you are the sort of person who one-boxes or two-boxes in Newcomblike problems is a fixed property of Lumifer that you can’t influence in any way
It’s not a fixed property, it’s undetermined. Go ask a random person whether he one-boxes or two-boxes :-)
Earlier, you seemed to be saying that you’re incapable of making such a choice. Now, you seem to be saying that you don’t find it useful to do so, which seems to suggest… though not assert… that you can.
So, just to clarify: on your view, are you capable of precommitting to one-box or two-box? And if so, what do you mean when you say that you can’t make a choice to be a “one-boxer”—how is that different from precommitting to one-box?
I, personally, have heard of Newcomb’s Problem, so one can argue that I am capable of pre-committing. However, only a tiny minority of the world’s population has heard of that problem and, as far as I know, the default formulation of Newcomb’s Problem assumes that the subject had no advance warning. Therefore, in the general case there is no pre-commitment and the choice does not exist.
on your view, are you capable of precommitting to one-box or two-box?
You have answered that “one can argue that” you are capable of it. Which, well, OK, that’s probably true. One could also argue that you aren’t, I imagine.
So… on your view, are you capable of precommitting?
Because earlier you seemed to be saying that you weren’t able to. I think you’re now saying that you can (but that other people can’t). But it’s very hard to tell.
I can’t tell whether you’re just being slippery as a rhetorical strategy, or whether I’ve actually misunderstood you.
That aside: it’s not actually clear to me that precommitting to oneboxing is necessary. The predictor doesn’t require me to precommit to oneboxing, merely to have some set of properties that results in me oneboxing. Precommitment is a simple example of such a property, but hardly the only possible one.
See, that’s where I disagree. If you choose to one-box, even if that choice is made on a whim right before you’re required to select a box/boxes, Omega can predict that choice with accuracy. This isn’t backward causation; it’s simply what happens when you have a very good predictor. The problem with causal decision theory is that it neglects these sorts of acausal logical connections, instead electing to only keep track of causal connections. If Omega can predict you with high-enough accuracy, he can predict choices that you would make given certain information. If you take a random passerby and present them with a formulation of Newcomb’s Problem, Omega can analyze that passerby’s disposition and predict in advance how it will affect his/her reaction to that particular formulation of Newcomb’s Problem, including whether he/she will two-box or one-box. Conscious precommitment is not required; the only requirement is that you make a choice. If you or any other person chooses to one-box, regardless of whether they’ve previously heard of Newcomb’s Problem or made a precommitment, Omega will predict that decision with whatever accuracy we specify. Then the only questions are “How high of an accuracy do we need?”, followed by “Can humans reach this desired level of accuracy?” And while I’m hesitant to provide an absolute threshold for the first question, I do not hesitate at all to answer the second question with, “Yes, absolutely.” Thus we see that Newcomb-like situations can and do pop up in real life, with merely human predictors.
If there are any particulars you disagree with in the above explanation, please let me know.
If Omega can predict you with high-enough accuracy, he can predict choices that you would make given certain information.
Sure, I agree, Omega can do that.
However when I get to move, when I have the opportunity to make a choice, Omega is already done with his prediction. Regardless of what his prediction was, the optimal choice for me after Stage 1 is to two-box.
My choice cannot change what’s in the boxes—only Omega can determine what’s in the boxes and I have no choice with respect to his prediction.
Well, if you reason that way, you will end up two-boxing. And, of course, Omega will know that you will end up two-boxing. Therefore, he will put nothing in Box B. If, on the other hand, you had chosen to one-box instead, Omega would have known that, too. And he would have put $1000000 in Box B. If you say, “Oh, the contents of the boxes are already fixed, so I’m gonna two-box!”, there is not going to be anything in Box B. It doesn’t matter what reasoning you use to justify two-boxing, or how elaborate your argument is; if you end up two-boxing, you are going to get $1000 with probability (Omega’s-predictive-power)%. Sure, you can say, “The boxes are already filled,” but guess what? If you do that, you’re not going to get any money. (Well, I mean, you’ll get $1000, but you could have gotten $1000000.) Remember, the goal of a rationalist is to win. If you want to win, you will one-box. Period.
You chose to two-box in this hypothetical Newcomb’s Problem when you said earlier in this thread that you would two-box. Fortunately, since this is a hypothetical, you don’t actually gain or lose any utility from answering as you did, but had this been a real-life Newcomb-like situation, you would have. If (I’m actually tempted to say “when”, but that discussion can be held another time) you ever encounter a real-life Newcomb-like situation, I strongly recommend you one-box (or whatever the equivalent of one-boxing is in that situation).
I don’t believe real-life Newcomb situations exist or will exist in my future.
I also think that the local usage of “Newcomb-like” is misleading in that it is used to refer to situations which don’t have much to do with the classic Newcomb’s Problem.
I strongly recommend you one-box
You recommendation was considered and rejected :-)
I don’t believe real-life Newcomb situations exist or will exist in my future.
It is my understanding that Newcomb-like situations arise whenever you deal with agents who possess predictive capabilities greater than chance. It appears, however, that you do not agree with this statement. If it’s not too inconvenient, could you explain why?
when I get to move, when I have the opportunity to make a choice, Omega is already done with his prediction.
You have elsewhere agreed that you (though not everyone) have the ability to make choices that affect Omega’s prediction (including, but not limited to, the choice of whether or not to precommit to one-boxing).
That seems incompatible with your claim that all of your relevant choices are made after Omega’s prediction.
Have you changed your mind? Have I misunderstood you? Are you making inconsistent claims in different branches of this conversation? Do you not see an inconsistency? Other?
Ah. OK. And just to be clear: you believe that advance warning is necessary in order to decide whether to one-box or two-box… it simply isn’t possible, in the absence of advance warning, to make that choice; rather, in the absence of advance warning humans deterministically two-box. Have I understood that correctly?
it simply isn’t possible, in the absence of advance warning, to make that choice
Correct.
in the absence of advance warning humans deterministically two-box
Nope. I think two-boxing is the right thing to do, but humans are not deterministic; they can (and do) do all kinds of stuff. If you run an empirical test I think it’s very likely that some people will two-box and some people will one-box.
Gotcha: they don’t have a choice in which they do, on your account, but they might do one or the other. Correction accepted.
Incidentally, for the folks downvoting Lumifer here, I’m curious as to your reasons. I’ve found many of their earlier comments annoyingly evasive, but now they’re actually answering questions clearly. I disagree with those answers, but that’s another question altogether.
In which way am I not accountable? I am here, answering questions, not deleting my posts.
Sure, I often prefer to point to something rather than plop down a full specification. I am also rather fond of irony and sarcasm. But that’s not exactly the same thing as avoiding accountability, is it?
If you want highly specific answers, ask highly specific questions. If you feel there is ambiguity in the subject, resolve it in the question.
I don’t make a choice to be a “two-boxer” or a “one-boxer”.
If you said earlier in this thread that you would two-box, you are a two-boxer. If you said earlier in this thread that you would one-box, you are a one-boxer. If Omega correctly predicts your status as a one-boxer/two-boxer, he will fill Box B with the appropriate amount. Assuming that Omega is a good predictor, his prediction is contingent on your disposition as a one-boxer or a two-boxer. This means you can influence Omega’s prediction (and thus the contents of the boxes) simply by choosing to be a one-boxer. If Omega is a good-enough predictor, he will even be able to predict future changes in your state of mind. Therefore, the decision to one-box can and will affect Omega’s prediction, even if said decision is made AFTER Omega’s prediction.
This is the essence of being a reflectively consistent agent, as opposed to a reflectively inconsistent agent. For an example of an agent that is reflectively inconsistent, see causal decision theory. Let me know if you still have any qualms with this explanation.
If you said earlier in this thread that you would two-box, you are a two-boxer. If you said earlier in this thread that you would one-box, you are a one-boxer.
Oh, I can’t change my mind? I do that on a regular basis, you know...
This means you can influence Omega’s prediction (and thus the contents of the boxes) simply by choosing to be a one-boxer.
This implies that I am aware that I’ll face Newcomb’s Problem.
Let’s do Newcomb’s Problem with a random passer-by picked from the street—he has no idea what’s going to happen to him and has never heard of Omega or Newcomb’s Problem before. Omega has to make a prediction and fill the boxes before that passer-by gets any hint that something is going to happen.
So, Step 1 happens, the boxes are set up, and our passer-by is explained the whole game. What should he do? He never chose to be a one-boxer or a two-boxer because he had no idea such things existed. He can only make a choice now and the boxes are done and immutable. Why should he one-box?
Oh, I can’t change my mind? I do that on a regular basis, you know...
It seems unlikely to me that you would change your mind about being a one-boxer/two-boxer over the course of a single thread. Nevertheless, if you did so, I apologize for making presuppositions.
So, Step 1 happens, the boxes are set up, and our passer-by is explained the whole game. What should he do? He never chose to be a one-boxer or a two-boxer because he had no idea such things existed. He can only make a choice now and the boxes are done and immutable. Why should he one-box?
If Omega is a good-enough predictor, he will even be able to predict future changes in your state of mind. Therefore, the decision to one-box can and will affect Omega’s prediction, even if said decision is made AFTER Omega’s prediction.
If our hypothetical passerby chooses to one-box, then to Omega, he is a one-boxer. If he chooses to two-box, then to Omega, he is a two-boxer. There’s no “not choosing”, because if you make a choice about what to do, you are choosing.
The only problem is that you have causality going back in time. At the time of Omega’s decision the passer-by’s state with respect to one- or two-boxing is null, undetermined, does not exist. Omega can scan his brain or whatever and make his prediction, but the passer-by is not bound by that prediction and has not (yet) made any decisions.
The first chance our passer-by gets to make a decision is after the boxes are fixed. His decision (as opposed to his personality, preferences, goals, etc.) cannot affect Omega’s prediction because causality can’t go backwards in time. So at this point, after step 2, the only time he can make a decision, he should two-box.
As far as I’m aware, what you’re saying is basically the same thing as what causal decision theory says. I hate to pass the buck, but So8res has written a very good post on this already; anything I could say here has already been said by him, and better. If you’ve read it already, then I apologize; if not, I’d say give it a skim and see what you think of it.
As far as I’m aware, what you’re saying is basically the same thing as what causal decision theory says.
So8res’ post points out that
CDT is the academic standard decision theory. Economics, statistics, and philosophy all assume (or, indeed, define) that rational reasoners use causal decision theory to choose between available actions.
If Omega is omniscient the two actions (one- and two-boxing) each have a certain outcome with the probability of 1. So you just pick the better outcome. If Omega is just a skilled predictor, there is no certain outcome so you two-box.
You are facing a modified version of Newcomb’s Problem, which is identical to standard Newcomb except that Omega now has 99% predictive accuracy instead of ~100%. Do you one-box or two-box?
Two-box. From my point of view it’s all or nothing (and it has to be not ~100%, but exactly 100%).
You get $1000 with 99% probability and $1001000 with 1% probability, for a final expected value of $11000. A one-boxer gets $1000000 with 99% probability and $0 with 1% probability, with a final expected value of $990000. Even with probabilistic uncertainties, you would still have been comparatively better off one-boxing. And this isn’t just limited to high probabilities; theoretically any predictive power better than chance causes Newcomb-like situations.
In practice, this tends to go away with lower predictive accuracies because the relative rewards aren’t high enough to justify one-boxing. Nevertheless, I have little to no trouble believing that a skilled human predictor can reach accuracies of >80%, in which case these Newcomb-like tendencies are indeed present.
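For concreteness, here is a minimal sketch of that calculation in Python, with the accuracy left as a parameter so lower accuracies can be checked as well (the $1000 and $1000000 payoffs are the standard ones from the problem statement):

# Expected value of each disposition against a predictor with accuracy p.
# A correct prediction means Box B is empty for a two-boxer and full for a one-boxer.
def expected_values(p):
    ev_two_box = p * 1_000 + (1 - p) * 1_001_000
    ev_one_box = p * 1_000_000 + (1 - p) * 0
    return ev_one_box, ev_two_box

for accuracy in (0.99, 0.80, 0.51):
    one, two = expected_values(accuracy)
    print(f"accuracy {accuracy:.2f}: one-box EV = ${one:,.0f}, two-box EV = ${two:,.0f}")

With these payoffs the break-even accuracy works out to about 50.05%, which is why anything better than chance already favours one-boxing.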
No, I don’t think so.
Let’s do things in temporal order.
Step 1: Omega makes a prediction and puts money into boxes.
What’s the prediction and what’s in the boxes?
Assuming you are a two-boxer, there is a 99% chance that there is nothing in Box B (and $1000 in Box A, as always), along with a 1% chance that Box B contains $1000000. If we’re going with the most likely scenario, there is nothing in Box B.
In the classic Newcomb’s Problem Omega moves first before I can do anything. Step 1 happens before I made any choices.
If Omega is a good predictor, he’ll predict my decision, but there is nothing I can do about it. I don’t make a choice to be a “two-boxer” or a “one-boxer”.
I can make a choice only after step 1, once the boxes are set up and unchangeable. And after step 1 everything is fixed so you should two-box.
This is true both for the 99% and 100% accurate predictor, isn’t it? Yet you say you one-box with the 100% one.
Please answer me this:
What does 99% accuracy mean to you exactly, in this scenario? If you know that Omega can predict you with 99% accuracy, what reality does this correspond to for you? What do you expect to happen different, compared to if he could predict you with, say, 50% accuracy (purely chance guesses)?
Actually, let’s make it more specific: suppose you do this same problem 1000 times, with a 99% Omega, what amount of money do you expect to end up with if you two-box? And what if you one-box?
The reason I am asking is that it appears to me like, the moment Omega stops being perfectly 100% accurate, you really stop believing he can predict you at all. It’s like, if you’re given a Newcomblike problem that involves “Omega can predict you with 99% accuracy”, you don’t actually accept this information (and are therefore solving a different problem).
It’s unsafe to guess at another’s thoughts, and I could be wrong. But I simply fail to see, based on the things you’ve said, how the “99% accuracy” information informs your model of the situation at all.
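To make the 1000-trials question concrete, here is a minimal simulation sketch of what I have in mind, assuming independent rounds, 99% accuracy against either disposition, and the standard payoffs:

import random

def simulate(disposition, trials=1000, accuracy=0.99, seed=0):
    # Total winnings over `trials` rounds; each round Omega's prediction is
    # correct with probability `accuracy`, independently of the other rounds.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        prediction_correct = rng.random() < accuracy
        predicted_one_box = (disposition == "one-box") == prediction_correct
        box_b = 1_000_000 if predicted_one_box else 0
        total += box_b if disposition == "one-box" else box_b + 1_000
    return total

print("one-box:", simulate("one-box"))   # roughly $990 million over 1000 rounds
print("two-box:", simulate("two-box"))   # roughly $11 million over 1000 rounds

The per-round expected values ($990000 versus $11000) simply scale up with the number of rounds.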
Yes, because 100% is achievable only through magic. Omniscience makes Omega a god and you can’t trick an omniscient god.
That’s why there is a discontinuity between P=1 and P=1−ε—we leave the normal world and enter the realm of magic.
In the frequentist framework this means that if you were to fork the universe and make 100 exact copies of it, in 99 copies Omega would be correct and in one of them he would be wrong.
In the Bayesian framework probabilities are degrees of belief and the local convention is to think of them as betting odds, so this means I should be indifferent which side to take of a 1 to 99 bet on the correctness of Omega’s decision.
The question is badly phrased because it ignores the temporal order and so causality.
If you become omniscient for a moment and pick 1000 people who are guaranteed to two-box and 1000 people who are guaranteed to one-box, the one-box people will, of course, get more money from a 99% Omega. But it’s not a matter of their choice, you picked them this way.
Not at all. I mentioned this before and I’ll repeat it again: there is no link between Omega’s prediction and the choice of a standard participant in Newcomb’s Problem. The standard participant does not have any advance information about Omega with his boxes and so cannot pre-commit to anything. He only gets to do something after the boxes become immutable.
At the core, I think, the issue is one of causality, and I’m not comfortable with the acausal manoeuvres that LW is so fond of.
I asked what it means to you. Not sure why I got an explanation of bayesian vs frequentist probability.
You seem to believe precommitment is the only thing that makes your choice knowable to Omega in advance. But Omega got his track record of 99% accurate predictions somehow. Whatever algorithms are ultimately responsible for your choice, they—or rather their causal ancestors—exist in the world observed by Omega at the time he’s filling his boxes. Unless you believe in some kind of acausal choicemaking, you are just as “committed” if you’d never heard of Newcomb’s problem. However, from within the algorithms, you may not know what choice you’re bound to make until you’re done computing. Just as a deterministic chess playing program is still choosing a move, even if the choice in a given position is bound to be, say, Nf4-e6.
Indeed, your willingness (or lack thereof) to believe that, whatever the output of your thinking, Omega is 99% likely to have predicted it, is probably going to be a factor in Omega’s original decision.
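As a minimal sketch of that chess analogy (the position and scores here are made up purely for illustration):

# A deterministic chess-style chooser: evaluate each legal move and play the best one.
# The move is fully determined by the position and the evaluation function, yet
# "the program chose Nf4-e6" is still an ordinary, true description of what happened.
def best_move(position, legal_moves, evaluate):
    return max(legal_moves, key=lambda move: evaluate(position, move))

scores = {"Nf4-e6": 1.3, "Qd1-h5": 0.4, "O-O": 0.9}   # toy evaluations
print(best_move("some position", list(scores), lambda pos, move: scores[move]))  # -> Nf4-e6

From outside, the output is predictable by anyone who knows the program and the position; from inside, the program still has to run the computation to find out which move it picks.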
To me personally? Pretty much nothing, an abstract exercise with numbers. As I said before (though the post was heavily downvoted and probably invisible by now), I don’t expect to meet Omega and his boxes in my future, so I don’t care much, certainly not enough to pre-commit.
Or are you asking what 1% probability means to me? I suspect I have a pretty conventional perception of it.
No, that’s not the issue. We are repeating the whole line of argument I went through with dxu and TheOtherDave during the last couple of days—see e.g. this and browse up and down this subthread. Keep in mind that some of my posts there were downvoted into invisibility so you may need to click on buttons to open parts of the subthread.
Sigh. I wasn’t asking if you care. I meant something more like this:
Feynman doesn’t believe the number, but this is what it means to him: if he were to take the number seriously, this is the reality he thinks it would correspond to. That’s what I meant when I asked “what does this number mean to you”. What reality the “99% accuracy” (hypothetically) translates to for you when you consider the problem. What work it’s doing in your model of it, never mind if it’s a toy model.
Suppose you—or if you prefer, any not-precommitted participant—faces Omega, who presents his already-filled boxes, and the participant chooses to either one-box or two-box. Does the 99% accuracy mean you expect to afterwards find that Omega predicted that choice in 99 out of 100 cases on average? If so, can you draw up expected values for either choice? If not, how else do you understand that number?
OK, I re-read it and I think I see it.
I think the issue lies in this “after” word. If determinism, then you don’t get to first have a knowable-to-Omega disposition to either one-box or two-box, and then magically make an independent choice after Omega fills the boxes. The choice was already unavoidably part of the Universe before Stage 1, in the form of its causal ancestors, which are evidence for Omega to pick up to make his 99% accurate prediction. (So the choice affected Omega just fine, which is why I am not very fond of the word “acausal”). The standard intuition that places decisionmaking in some sort of a causal void removed from the rest of the Universe doesn’t work too well when predictability is involved.
Yep, that’s another way to look at the causality issue. I asked upthread if the correctness of the one-boxing “solution” implies lack of free will and, in fact, depends on the lack of free will. I did not get a satisfying answer (instead I got condescending nonsense about my corrupt variation of an ideal CDT agent).
If “the choice was already unavoidably part of the Universe before Stage 1” then it is not a “choice” as I understand the word. In this case the whole problem disappears, since if the choice to one-box or two-box is predetermined, what are we talking about, anyway?
As is often the case, Christianity already had to deal with this philosophical issue of a predetermined choice—see e.g. Calvinism.
Still wouldn’t mind getting a proper answer to my question...
And well, yeah, if you believe in a nondeterministic, acausal free will, then we may have an unbridgeable disagreement. But even then… suppose we put the issue of determinism and free will completely aside for now. Blackbox it.
Imagine—put on your “take it seriously” glasses for a moment if I can have your indulgence—that a sufficiently advanced alien actually comes to Earth and in many, many trials establishes a 99% track record of predicting people’s n-boxing choices (to keep it simple, it’s 99% for one-boxers and also 99% for two-boxers).
Imagine also that, for whatever reason, you didn’t precommit (maybe sufficiently reliable precommitment mechanisms are costly and inconvenient, inflation ate into the value of the prize, and the chance of being chosen by Omega for participation is tiny. Or just akrasia, I don’t care). And then you get chosen for participation and accept (hey, free money).
What now? Do you have a 99% expectation that, after your choice, Omega will have predicted it correctly? Does that let you calculate expected values? If so, what are they? If not, in what way are you different from the historical participants who amounted to the 99% track record Omega’s built so far (= participants already known to have found themselves predicted 99% of the time)?
Or are you saying that an Omega like that can’t exist in the first place? In which case, how is that different—other than in degree—from whenever humans predict other humans with better-than-chance accuracy?
But let me ask that question again, then. Does the correctness of one-boxing require determinism, aka lack of free will?
Let’s get a bit more precise here. There are two ways you can use this term. One is with respect to the future, to express the probability of something that hasn’t happened yet. The other is with respect to lack of knowledge, to express that something already happened, but you just don’t know what it is.
The meanings conveyed by these two ways are very different. In particular, when looking at Omega’s two boxes, there is no “expected value” in the first sense. Whatever happened already happened. The true state of nature is that one distribution of money between the boxes has the probability of 1 -- it happened—and the other distribution has the probability of 0 -- it did not happen. I don’t know which one of them happened, so people talk about expected values in the sense of uncertainty of their beliefs, but that’s quite a different thing.
So after Stage 1 in reality there are no expected values of the content of the boxes—the boxes are already set and immutable. It’s only my knowledge that’s lacking. And in this particular setting it so happens that I can make my knowledge not matter at all—by taking both boxes.
Your approach also seems to have the following problem. Essentially, Omega views all people as divided into two classes: one-boxers and two-boxers. If belonging to such a class is unchangeable (see predestination), the problem disappears since you can do nothing about it. However, if you can change which class you belong to (e.g. before the game starts), you can change it after Stage 1 as well. So the optimal solution looks to be to get yourself into the one-boxing class before the game but then, once Stage 1 happens, switch to the two-boxing class. And if you can’t pull off this trick, well, why do you think you can change classes at all?
I don’t think so, which is the gist of my last post—I think all it requires is taking Omega’s track record seriously. I suppose this means I prefer EDT to CDT—it seems insane to me to ignore evidence, past performance showing that 99% of everyone who’s two-boxed so far got out with much less money.
No more than a typical coin is either a header or a tailer. Omega can simply predict with high accuracy if it’s gonna be heads or tails on the next, specific occasion… or if it’s gonna be one or two boxes, already accounting for any tricks. Imagine you have a tell, like in poker, at least when facing someone as observant as Omega.
All right, I’m done here. Trying to get a direct answer to my question stopped feeling worthwhile.
The fact that you think these things are the same thing is the problem. Determinism does not imply lack of “choice”, not in any sense that matters.
To be absolutely clear:
No, one-boxing does not require lack of free will.
But it should also be obvious that for Omega to predict you requires you to be predictable. Determinism provides this for the 100% accurate case. This is not any kind of contradiction.
No “changing” is required. You can’t “change” the future any more than you can “change” the past. You simply determine it. Whichever choice you decide to make is the choice you were always going to make, and determines the class you are, and always were in.
Yes, I understand that. I call that lack of choice and absence of free will. Your terminology may differ.
Just so I’m clear: when you call that a lack of choice, do you mean to distinguish it from anything? That is, is there anything in the world you would call the presence of choice? Does the word “choice,” for you, have a real referent?
Sure. I walk into an ice cream parlour; which flavour am I going to choose? Can you predict? Can anyone predict with complete certainty? If not, I’ll make a choice.
This definition of choice is empty. If I can’t predict which flavour you will buy based on knowing what flavours you like or what you want, you aren’t choosing in any meaningful sense at all. You’re just arbitrarily, capriciously, picking a flavour at random. Your “choice” doesn’t even contribute to your own benefit.
You keep thinking that and I’ll enjoy the delicious ice cream that I chose.
If it’s delicious, then any observer who knows what you consider delicious could have predicted what you chose. (Unless there are a few flavours that you deem exactly equally delicious, in which case it makes no difference, and you are choosing at random between them.)
Oh, no, it does make a difference, for my flavour preferences are not stable and depend on a variety of things like my mood, the season, the last food I ate, etc., etc.
And all of those things are known by a sufficiently informed observer...
Show me one.
No need. It only needs to be possible for “all of those things are known by a sufficiently informed observer” to be true!
So how do you know what’s possible? Do you have data, by any chance? Pray tell!
Are you going to assert that your preferences are stored outside your brain, beyond the reach of causality? Perhaps in some kind of platonic realm?
Mood—check, that shows up in facial expressions, at least.
Season—check, all you have to do is look out the window, or look at the calendar.
Last food you ate—check, I can follow you around for a day, or just scan your stomach.
This line of argument really seems futile. Is it so hard to believe that your mind is made of parts, just like everything else in the universe?
So, show me.
OK. Thanks for clarifying.
So then just decide to one-box. You aren’t something outside of physics; you are part of physics and your decision is as much a part of physics as anything else. Your decision to one-box or two-box is determined by physics, true, but that’s not an excuse for not choosing! That’s like saying, “The future is already set in stone; if I get hit by a car in the street, that’s what was always going to happen. Therefore I’m going to stop looking both ways when I cross the street. After all, if I get hit, that’s what physics said was going to happen, right?”
Errr… can I? nshepperd says
Whichever choice you decide to make is the choice you were always going to make, and determines the class you are, and always were in.
so I don’t see anything I can do. Predestination is a bitch.
It’s not an excuse, it’s a reason. Que sera, sera—what will be will be. I don’t understand what this “choosing” you speak of is :-/
Yes, that’s what you are telling me. It’s just physics, right?
Um, not my decision again. It was predetermined whether I would look both ways or not.
Choosing is deliberation, deliberation is choosing. Just consider the alternatives (one-box, two-box) and do the one that results in you having more money.
The keyword here is decide. Just because you were always going to make that choice doesn’t mean you didn’t decide. You weighed up the costs and benefits of each option, didn’t you?
It really isn’t hard. Just think about it, then take one box.
Clearly that’s two-boxing. Omega already made his choice, so if he thought I’d two-box, I’ll get:
- One box: nothing
- Two box: the small reward
If Omega thought I’d one-box:
- One box: big reward
- Two box: big reward + small reward
Two-boxing results in more money no matter what Omega thought I’d choose.
Missing the Point: now a major motion picture.
Is that the drumbeat of nshepperd’s head against the desk that I hear..? :-D
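For reference, the two calculations being talked past each other in this exchange can be written out explicitly (a sketch, assuming the standard payoffs and a 99% accurate predictor):

# Dominance reasoning: hold Omega's prediction fixed and compare the two actions.
payoff = {  # payoff[prediction][action]
    "predict-one-box": {"one-box": 1_000_000, "two-box": 1_001_000},
    "predict-two-box": {"one-box": 0,         "two-box": 1_000},
}
for row in payoff.values():
    assert row["two-box"] > row["one-box"]   # two-boxing wins in each row taken separately

# Expected value when the prediction is correlated with the action (99% accuracy):
accuracy = 0.99
ev_one_box = accuracy * payoff["predict-one-box"]["one-box"] + (1 - accuracy) * payoff["predict-two-box"]["one-box"]
ev_two_box = accuracy * payoff["predict-two-box"]["two-box"] + (1 - accuracy) * payoff["predict-one-box"]["two-box"]
print(ev_one_box, ev_two_box)   # 990000.0 vs 11000.0

The dominance table is correct row by row; the disagreement is over whether you get to treat the row as fixed independently of which action you take.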
What if I try to predict what Omega does, and do the opposite?
That would mean that either 1) there are some strategies I am incapable of executing, or 2) Omega can’t in principle predict what I do, since it is indirectly predicting itself.
Alternatively, what if instead of me trying to predict Omega, we run this with transparent boxes and I base my decision on what I see in the boxes, doing the opposite of what Omega predicted? Again, Omega is indirectly predicting itself.
I don’t see how this is relevant, but yes, in principle it’s impossible to predict the universe perfectly, on account of the universe + your brain being bigger than your brain. Although, if you live in a bubble universe that is bigger than the rest of the universe, whose interaction with the rest of the universe is limited precisely to your chosen manipulation of the connecting bridge; basically, if you are AIXI, then you may be able to perfectly predict the universe conditional on your actions.
This has pretty much no impact on actual Newcomb’s though, since we can just define such problems away by making Omega do the obvious thing to prevent such shenanigans (“trolls get no money”). For the purpose of the thought experiment, action-conditional predictions are fine.
IOW, this is not a problem with Newcomb’s. By the way, this has been discussed previously.
You’ve now destroyed the usefulness of Newcomb as a potentially interesting analogy to the real world. In real world games, my opponent is trying to infer my strategy and I’m trying to infer theirs.
If Newcomb is only about a weird world where Omega can try to predict the player’s actions, but the player is not allowed to predict Omega’s, then it’s sort of a silly problem. It’s lost most of its generality because you’ve explicitly disallowed the majority of strategies.
If you allow the player to pursue his own strategy, then it’s still a silly problem, because the question ends up being inconsistent (because if Omega plays Omega, nothing can happen).
In real world games, we spend most our time trying to make action-conditional predictions. “If I play Foo, then my opponent will play Bar”. There’s no attempting to circularly predict yourself with unconditional predictions. The sensible formulation of Newcomb’s matches that.
(For example, transparent boxes: Omega predicts “if I fill both boxes, then player will ___” and fills the boxes based on that prediction. Or a few other variations on that.)
In many (probably most?) games we consider the opponent’s strategy, not simply their next move. Making moves in an attempt to confuse your opponent’s estimation of your own strategy is a common tactic in many games.
Your “modified Newcomb” doesn’t allow the chooser to have a strategy: they aren’t allowed to say “if I predict Omega did X, I’ll do Y.” It’s a weird sort of game where my opponent takes my strategy into account, but something keeps me from considering my opponent’s.
Can’t Omega follow the strategy of ‘Trolls get no money,’ which by assumption is worse for you? I feel like this would result in some false positives, but perhaps not—and the scenario says nothing about the people who don’t get to play in any case.
No, because that’s fighting the hypothetical. Assume that he doesn’t do that.
It is actually approximately the opposite of fighting the hypothetical. It is managing the people who are trying to fight the hypothetical. Precise wording of the details of the specification can be used to preempt such replies, but for casual definitions that assume good faith, explicit clauses for the distracting edge cases sometimes need to be added.
It is fighting the hypothetical because you are not the only one providing hypotheticals. I am too; I’m providing a hypothetical where the player’s strategy makes this the least convenient possible world for people who claim that having such an Omega is a self-consistent concept. Saying “no, you can’t use that strategy” is fighting the hypothetical.
Moreover, the strategy “pick the opposite of what I predict Omega does” is a member of a class of strategies that have the same problem; it’s just an example of such a strategy that is particularly clear-cut, and the fact that it is clear-cut and blatantly demonstrates the problem with the scenario is the very aspect that leads you to call it trolling Omega. “You can’t troll Omega” becomes equivalent to “you can’t pick a strategy that makes the flaw in the scenario too obvious”.
If your goal is to show that Omega is “impossible” or “inconsistent”, then having Omega adopt the strategy “leave both boxes empty for people who try to predict me / do any other funny stuff” is a perfectly legitimate counterargument. It shows that Omega is in fact consistent if he adopts such strategy. You have no right to just ignore that counterargument.
Indeed, Omega requires a strategy for when he finds that you are too hard to predict. The only reason such a strategy is not provided beforehand in the default problem description is because we are not (in the context of developing decision theory) talking about situations where you are powerful enough to predict Omega, so such a specification would be redundant. The assumption, for the purpose of illuminating problems with classical decision theory, is that Omega has vastly more computational resources than you do, so that the difficult decision tree that presents the problem will obtain.
By the way, it is extremely normal for there to be strategies you are “incapable of executing”. For example, I am currently unable to execute the strategy “predict what you will say next, and counter it first”, because I can’t predict you. Computation is a resource like any other.
If you are suggesting that Omega read my mind and think “does this human intend to outsmart me, Omega”, then sure he can do that. But that only takes care of the specific version of the strategy where the player has conscious intent.
If you’re suggesting “Omega figures out whether my strategy is functionally equivalent to trying to outsmart me”, you’re basically claiming that Omega can solve the halting problem by analyzing the situation to determine if it’s an instance of the halting problem, and outputting an appropriate answer if that is the case. That doesn’t work.
That still requires that he determine that I am too hard to predict, which either means solving the halting problem or running on a timer. Running on a timer is a legitimate answer, except again it means that there are some strategies I cannot execute.
I thought the assumption is that I am a perfect reasoner and can execute any strategy.
Why would this be the assumption?
There’s your answer.
I don’t see how Omega running his simulation on a timer makes any difference for this, but either way this is normal and expected. Problem resolved.
Not at all. Though it may be convenient to postulate arbitrarily large computing power (as long as Omega’s power is increased to match) so that we can consider brute force algorithms instead of having to also worry about how to make it efficient.
(Actually, if you look at the decision tree for Newcomb’s, the intended options for your strategy are clearly supposed to be “unconditionally one-box” and “unconditionally two-box”, with potentially a mixed strategy allowed. Which is why you are provided with no information whatsoever that would allow you to predict Omega. And indeed the decision tree explicitly states that your state of knowledge is identical whether Omega fills or doesn’t fill the box.)
It’s me who has to run on a timer. If I am only permitted to execute 1000 instructions to decide what my answer is, I may not be able to simulate Omega.
Yes, I am assuming that I am capable of executing arbitrarily many instructions when computing my strategy.
I know what problem Omega is trying to solve. If I am a perfect reasoner, and I know that Omega is, I should be able to predict Omega without actually having knowledge of Omega’s internals.
Deciding which branch of the decision tree to pick is something I do using a process that has, as a step, simulating Omega. It is tempting to say “it doesn’t matter what process you use to choose a branch of the decision tree, each branch has a value that can be compared independently of why you chose the branch”, but that’s not correct. In the original problem, if I just compare the branches without considering Omega’s predictions, I should always two-box. If I consider Omega’s predictions, that cuts off some branches in a way which changes the relative ranking of the choices. If I consider my predictions of Omega’s predictions, that cuts off more branches, in a way which prevents the choices from even having a ranking.
But apparently you want to ignore the part where I said Omega has to have his own computing power increased to match. The fact that Omega is vastly more intelligent and computationally powerful than you is a fundamental premise of the problem. This is what stops you from magically “predicting him”.
Look, in Newcomb’s problem you are not supposed to be a “perfect reasoner” with infinite computing time or whatever. You are just a human. Omega is the superintelligence. So, any argument you make that is premised on being a perfect reasoner is automatically irrelevant and inapplicable. Do you have a point that is not based on this misunderstanding of the thought experiment? What is your point, even?
It’s already arbitrarily large. You want that expanded to match arbitrarily large?
Asking “which box should you pick” implies that you can follow a chain of reasoning which outputs an answer about which box to pick.
My decision making strategy is “figure out what Omega did and do the opposite”. It only fails to produce a useful result if Omega fails to produce a useful result (perhaps by trying to predict me and not halting). And Omega goes first, so we never get to the point where I try my decision strategy and don’t halt.
(And if you’re going to respond with “then Omega knows in advance that your decision strategy doesn’t halt”, how’s he going to know that?)
Furthermore, there’s always the transparent boxes situation. Instead of explicitly simulating Omega, I implicitly simulate Omega by looking in the transparent boxes and determining what Omega’s choice was.
That Omega cannot be a perfect predictor because being one no matter what strategy the human uses would imply being able to solve the halting problem.
When I say “arbitrarily large” I do not mean infinite. You have some fixed computing power, X (which you can interpret as “memory size” or “number of computations you can do before the sun explodes the next day” or whatever). The premise of newcomb’s is that Omega has some fixed computing power Q * X, where Q is really really extremely large. You can increase X as much as you like, as long as Omega is still Q times smarter.
Which does not even remotely imply being a perfect reasoner. An ordinary human is capable of doing this just fine.
Two points. First, if Omega’s memory is Q times larger than yours, you can’t fit a simulation of him in your head, so predicting by simulation is not going to work. Second, if Omega has Q times as much computing time as you, you can try to predict him (by any method) for X steps, at which point the sun explodes. Naturally, Omega simulates you for X steps, notices that you didn’t give a result before the sun explodes, so leaves both boxes empty and flies away to safety.
Only under the artificial irrelevant-to-the-thought-experiment conditions that require him to care whether you’ll one-box or two-box after standing in front of the boxes for millions of years thinking about it. Whether or not the sun explodes, or Omega himself imposes a time limit, a realistic Omega only simulates for X steps, then stops. No halting-problem-solving involved.
In other words, if “Omega isn’t a perfect predictor” means that he can’t simulate a physical system for an infinite number of steps in finite time then I agree but don’t give a shit. Such a thing is entirely unneccessary. In the thought experiment, if you are a human, you die of aging after less than 100 years. And any strategy that involves you thinking in front of the boxes until you die of aging (or starvation, for that matter) is clearly flawed anyway.
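A minimal sketch of the bounded-predictor idea being described here, assuming Omega simply runs the player’s decision procedure with a step budget and treats a non-answer the same way it treats other funny business (the step-budget mechanism and function names are illustrative, not part of the standard problem statement):

# Omega with a finite simulation budget: run the player's decision procedure for at
# most `budget` steps; if it hasn't committed to "one-box" or "two-box" by then,
# leave Box B empty ("trolls get no money").
def omega_fills_box_b(player, budget=1_000_000):
    for steps, output in enumerate(player(), start=1):
        if output in ("one-box", "two-box"):
            return output == "one-box"        # fill Box B only for predicted one-boxers
        if steps >= budget:
            return False
    return False                              # player halted without ever answering

def stubborn_non_halter():
    while True:
        yield "still thinking..."             # never commits to an answer

def simple_one_boxer():
    yield "one-box"

print(omega_fills_box_b(stubborn_non_halter))  # False
print(omega_fills_box_b(simple_one_boxer))     # True

No halting-problem oracle is involved: Omega never needs to know whether the player’s procedure would eventually halt, only what it does within the budget.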
This example is less stupid since it is not based on trying to circularly predict yourself. But in this case Omega just makes action-conditional predictions and fills the boxes however he likes.
It sounds like your decision making strategy fails to produce a useful result. That is unfortunate for anyone who happens to attempt to employ it. You might consider changing it to something that works.
“Ha! What if I don’t choose One box OR Two boxes! I can choose No Boxes out of indecision instead!” isn’t a particularly useful objection.
No, Nshepperd is right. Omega imposing computation limits on itself solves the problem (such as it is). You can waste as much time as you like. Omega is gone and so doesn’t care whether you pick any boxes before the end of time. This is a standard solution for considering cooperation between bounded rational agents with shared source code.
When attempting to achieve mutual cooperation (essentially what Newcomblike problems are all about) making yourself difficult to analyse only helps against terribly naive intelligences. ie. It’s a solved problem and essentially useless for all serious decision theory discussion about cooperation problems.
This contradicts the accuracy stated at the beginning. Omega can’t leave both boxes empty for people who try to adopt a mixed strategy AND also maintain his 99.whatever accuracy on one-boxers.
And even if Omega has way more computational power than I do, I can still generate a random number. I can flip a coin that’s 60/40 one-box, two-box. The most accurate Omega can be, then, is to assume I one-box.
He can maintain his 99% accuracy on deterministic one-boxers, which is all that matters for the hypothetical.
Alternatively, if we want to explicitly include mixed strategies as an available option, the general answer is that Omega fills the box with probability = the probability that your mixed strategy one-boxes.
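Under that rule the mixed strategy’s expected value is straightforward to write down; a sketch (assuming Box B gets filled with probability equal to the strategy’s one-box probability, independently of the particular coin flip on the day):

# Expected payout of a strategy that one-boxes with probability q, against an
# Omega that fills Box B with probability q.
def mixed_strategy_ev(q):
    ev_if_one_box = q * 1_000_000             # you get whatever is in Box B
    ev_if_two_box = q * 1_000_000 + 1_000     # Box B plus the guaranteed $1,000
    return q * ev_if_one_box + (1 - q) * ev_if_two_box

for q in (0.0, 0.6, 1.0):
    print(q, mixed_strategy_ev(q))   # 1000.0, 600400.0, 1000000.0

So the 60/40 coin does worse than always one-boxing, and the best “mixed” strategy under this rule is the degenerate one that one-boxes every time.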
All of this is very true, and I agree with it wholeheartedly. However, I think Jiro’s second scenario is more interesting, because then predicting Omega is not needed; you can see what Omega’s prediction was just by looking in (the now transparent) Box B.
As I argued in this comment, however, the scenario as it currently stands is not well-specified; we need some idea of what sort of rule Omega is using to fill the boxes based on his prediction. I have not yet come up with a rule that would allow Omega to be consistent in such a scenario, though, and I’m not sure if consistency in this situation would even be possible for Omega. Any comments?
Previous discussions of Transparent Newcomb’s problem have been well specified. I seem to recall doing so in footnotes so as to avoid distraction.
The problem (such as it is) is that there is ambiguity between the possible coherent specifications, not a complete lack. As your comment points out there are (merely) two possible situations for the player to be in and Omega is able to counter-factually predict the response to either of them, with said responses limited to a boolean. That’s not a lot of permutations. You could specify all 4 exhaustively if you are lazy.
IF (Two box when empty AND One box when full) THEN X
IF …
Any difficulty here is in choosing the set of rewards that most usefully illustrate the interesting aspects of the problem.
I’d say that about hits the nail on the head. The permutations certainly are exhaustively specifiable. The problem is that I’m not sure how to specify some of the branches. Here are all four possibilities (written in pseudo-code following your example):
IF (Two box when empty And Two box when full) THEN X
IF (One box when empty And One box when full) THEN X
IF (Two box when empty And One box when full) THEN X
IF (One box when empty And Two box when full) THEN X
The rewards for 1 and 2 seem obvious; I’m having trouble, however, imagining what the rewards for 3 and 4 should be. The original Newcomb’s Problem had a simple point to demonstrate, namely that logical connections should be respected along with causal connections. This point was made simple by the fact that there are two choices, but only one situation. When discussing transparent Newcomb, though, it’s hard to see how this point maps to the latter two situations in a useful and/or interesting way.
Option 3 is of the most interest to me when discussing the Transparent variant. Many otherwise adamant One Boxers will advocate (what is in effect) 3 when first encountering the question. Since I advocate strategy 2 there is a more interesting theoretical disagreement. ie. From my perspective I get to argue with (literally) less-wrong wrong people, with a correspondingly higher chance that I’m the one who is confused.
The difference between 2 and 3 becomes more obviously relevant when noise is introduced (eg. 99% accuracy Omega). I choose to take literally nothing in some situations. Some think that is crazy...
In the simplest formulation the payoff for three is undetermined. But not undetermined in the sense that Omega’s proposal is made incoherent. Arbitrary as in Omega can do whatever the heck it wants and still construct a coherent narrative. I’d personally call that an obviously worse decision but for simplicity prefer to define 3 as a defect (Big Box Empty outcome).
As for 4… A payoff of both boxes empty (or both boxes full but contaminated with anthrax spores) seems fitting. But simply leaving the large box empty is sufficient for decision theoretic purposes.
Out of interest, and because your other comments on the subject seem well informed, what do you choose when you encounter Transparent Newcomb and find the big box empty?
This is a question that I find confusing due to conflicting intuitions. Fortunately, since I endorse reflective consistency, I can replace that question with the following one, which is equivalent in my decision framework, and which I find significantly less confusing:
“What would you want to precommit to doing, if you encountered transparent Newcomb and found the big box (a.k.a. Box B) empty?”
My answer to this question would be dependent upon Omega’s rule for rewarding players. If Omega only fills Box B if the player employs the strategy outlined in 2, then I would want to precommit to unconditional one-boxing—and since I would want to precommit to doing so, I would in fact do so. If Omega is willing to reward the player by filling Box B even if the player employs the strategy outlined in 3, then I would see nothing wrong with two-boxing, since I would have wanted to precommit to that strategy in advance. Personally, I find the former scenario—the one where Omega only rewards people who employ strategy 2--to be more in line with the original Newcomb’s Problem, for some intuitive reason that I can’t quite articulate.
What’s interesting, though, is that some people two-box even upon hearing that Omega only rewards the strategy outlined in 2--upon hearing, in other words, that they are in the first scenario described in the above paragraph. I would imagine that their reasoning process goes something like this: “Omega has left Box B empty. Therefore he has predicted that I’m going to two-box. It is extremely unlikely a priori that Omega is wrong in his predictions, and besides, I stand to gain nothing from one-boxing now. Therefore, I should two-box, both because it nets me more money and because Omega predicted that I would do so.”
I disagree with this line of reasoning, however, because it is very similar to the line of reasoning that leads to self-fulfilling prophecies. As a rule, I don’t do things just because somebody said I would do them, even if that somebody has a reputation for being extremely accurate, because then that becomes the only reason it happened in the first place. As with most situations involving acausal reasoning, however, I can only place so much confidence in me being correct, as opposed to me being so confused I don’t even realize I’m wrong.
It would seem to me that Omega’s actions would be as follows:
IF (Two box when empty And Two box when full) THEN Empty
IF (One box when empty And One box when full) THEN Full
IF (Two box when empty And One box when full) THEN Empty or Full
IF (One box when empty And Two box when full) THEN Refuse to present boxes
Cases 1 and 2 are straightforward. Case 3 works for the problem, no matter which set of boxes Omega chooses to leave.
In order for Omega to maintain its high prediction accuracy, though, it is necessary—if Omega predicts that a given player will choose option 4 - that Omega simply refuse to present the transparent boxes to this player. Or, at least, that the number of players who follow the other three options should vastly outnumber the fourth-option players.
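That table can also be written as a literal dispatch on the player’s two counterfactual responses, which makes the refusal branch explicit (a sketch; the function name and the choice of “leave empty” for case 3 are illustrative assumptions, since either action works there):

# Omega's policy as a function of what the player would do on seeing an empty Box B
# and on seeing a full Box B.
def omega_policy(choice_if_empty, choice_if_full):
    if choice_if_empty == "two-box" and choice_if_full == "two-box":
        return "present boxes, leave Box B empty"
    if choice_if_empty == "one-box" and choice_if_full == "one-box":
        return "present boxes, fill Box B"
    if choice_if_empty == "two-box" and choice_if_full == "one-box":
        return "present boxes, leave Box B empty"   # either filling is self-consistent here
    return "refuse to present the boxes"            # the 'perverse' case 4

print(omega_policy("one-box", "two-box"))   # -> refuse to present the boxes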
This is an interesting response because 4 is basically what Jiro was advocating earlier in the thread, and you’re basically suggesting that Omega wouldn’t even present the opportunity to people who would try to do that. Would you agree with this interpretation of your comment?
Yes, I would.
If we take the assumption, for the moment, that the people who would take option 4 form at least 10% of the population in general (this may be a little low), and we further take the idea that Omega has a track record of success in 99% or more of previous trials (as is often specified in Newcomb-like problems), then it is clear that whatever algorithm Omega is using to decide who to present the boxes to is biased, and biased heavily, against offering the boxes to such a person.
Consider:
P(P) = The probability that Omega will present the boxes to a given person.
P(M|P) = The probability that Omega will fill the boxes correctly (empty for a two-boxer, full for a one-boxer)
P(M’|P) = The probability that Omega will fail to fill the boxes correctly
P(O) = The probability that the person will choose option 4
P(M’|O) = 1 (from the definition of option 4), therefore P(M|O) = 0
and if Omega is a perfect predictor, then P(M|O’) = 1 as well.
P(M|P) = 0.99 (from the statement of the problem)
P(O) = 0.1 (assumed)
Now, of all the people to whom boxes are presented, Omega is only getting at most one percent wrong; P(M’|P) ≤ 0.01. Since P(M’|O) = 1, and P(M’|O’) = 0, it follows that P(O|P) ≤ 0.01.
If Omega is a less than perfect predictor, then P(M’|O’) > 0, and P(O|P) < 0.01.
And, since P(O|P) ≤ 0.01 while the base rate is P(O) = 0.1, I therefore conclude that Omega must have a bias—and a fairly strong one—against presenting the boxes to such perverse players.
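As a quick numeric check of that argument (under the same assumptions: a 10% base rate of option-4 players, at most 1% error among those actually presented, and perfect prediction of everyone else; the variable names are mine), consider the following sketch:

```python
# Numeric check of the bias argument. Assumptions: P(O) = 0.1,
# P(M'|P) <= 0.01, and Omega is never wrong about non-perverse players,
# so every error among presented players comes from an option-4 player.

p_O = 0.10              # base rate of option-4 players, P(O)
p_wrong_given_P = 0.01  # max error rate among presented players, P(M'|P)

# Since P(M'|P) = P(O|P) under these assumptions:
p_O_given_P = p_wrong_given_P   # P(O|P) <= 0.01

# By Bayes, P(P|O) / P(P) = P(O|P) / P(O): how much less likely an
# option-4 player is to be offered the boxes than an average person.
bias_ratio = p_O_given_P / p_O

print(f"P(O|P) <= {p_O_given_P}, base rate P(O) = {p_O}")
print(f"option-4 players are presented at most {bias_ratio:.0%} as often as average")
```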
It may be the least convenient possible world. More specifically it is the minor inconvenience of being careful to specify the problem correctly so as not to be distracted. Nshepperd gives some of the reasoning typically used in such cases.
What happens when you try to pick the opposite of what you predict Omega does is something like what happens when you try to beat Deep Fritz 14 at chess while outrunning a sports car. You just fail. Your brain is a few pounds of fat approximately optimised for out-competing other primates for mating opportunities. Omega is a super-intelligence. The assumption that Omega is smarter than the player isn’t an unreasonable one and is fundamental to the problem. Defying it is a particularly futile attempt to fight the hypothetical by basically ignoring it.
Generalising your proposed class to executing maximally inconvenient behaviours in response to, for example, the transparent Newcomb’s problem is where it actually gets (tangentially) interesting. In that case you can be inconvenient without out-predicting the superintelligence, and so the transparent Newcomb’s problem requires more care with the if clause.
In the first scenario, I doubt you would be able to predict Omega with sufficient accuracy to be able to do what you’re suggesting. Transparent boxes, though, are interesting. The problem is, the original Newcomb’s Problem had a single situation with two possible choices involved; transparent Newcomb, however, involves two situations:
Transparent Box B contains $1000000.
Transparent Box B contains nothing.
It’s unclear from this what Omega is even trying to predict; is he predicting your response to the first situation? The second one? Both? Is he following the rule: “If the player two-boxes in either situation, fill Box B with nothing”? Is he following the rule: “If the player one-boxes in either situation, fill Box B with $1000000”? The problem isn’t well-specified; you’ll have to give a better description of the situation before a response can be given.
That falls under 1) there are some strategies I am incapable of executing.
The transparent scenario is just a restatement of the opaque scenario with transparent boxes instead of “I predict what Omega does”. If you think the transparent scenario involves two situations, then the opaque scenario involves two situations as well. (1=opaque box B contains $1000000, and I predict that Omega put in $1000000 and 2=opaque box B contains nothing, and I predict that Omega puts in nothing.) If you object that we have no reason to think both of those opaque situations are possible, I can make a similar objection to the transparent situations.
Yes, it does, for the meaning of “decide” that I use.
LOL. It really isn’t hard. Just think about it, then accept Jesus as your personal saviour… X-)
Or think about it, then take two boxes.
Either way, you decide how much money you get, and the contents of the boxes are your fault.
What I’ve done for Newcomb problems is that I’ve precommitted to one-boxing, but then I’ve paid a friend to follow me at all times. Just before I choose the boxes, he is to perform complicated neurosurgery to turn me into a two-boxer. That way I maximize my gain.
That’s clever, but of course it won’t work. Omega can predict the outcome of neurosurgery.
Better wipe my memory of getting my friend to follow me then.
Also, I have built a second Omega, and given it to others. They are instructed to two-box if Omega 2 predicts Omega 1 thinks they’ll one-box, and vice versa.
.. and that costs less than $1000?
If I am predestined, nope, not my fault. In fact, in the full determinism case I’m not sure there’s “me” at all.
But anyway, how about that—you introduce me to Omega first, and I’ll think about his two boxes afterwards...
So the next time you cross the street, are you going to look both ways or not? You can’t calculate the physical consequences of every particle interaction taking place in your brain, so taking the route the universe takes, i.e. just letting everything play out at the lowest level, is not an option for you and your limited processing power. And yet, for some reason, I suspect you’ll probably answer that you will look both ways, despite being unable to actually predict your brain-state at the time of crossing the street. So if you can’t actually predict your decisions perfectly as dictated by physics… how do you know that you’ll actually look both ways next time you cross the street?
The answer is simple: you don’t know for certain. But you know that, all things being equal, you prefer not getting hit by a car to getting hit by a car. And looking both ways helps to lower the probability of getting hit by a car. Therefore, given knowledge of your preferences and your decision algorithm, you will choose to look both ways.
Note that nowhere in the above explanation was determinism violated! Every step of the physics plays out as it should… and yet we observe that your choice still exists here! Determinism explains free will, not explains it away; just because everything is determined doesn’t mean your choice doesn’t exist! You still have to choose; if I ask you if you were forced to reply to my comment earlier by the Absolute Power of Determinism, or if you chose to write that comment of your own accord, I suspect you’ll answer the latter.
Likewise, Omega may have predicted your decision, but that decision still falls to you to make. Just because Omega predicted what you would do doesn’t mean you can get away with not choosing, or choosing sub-optimally. If I said, “I predict that tomorrow Lumifer will jump off a cliff,” would you do it? Of course not. Conversely, if I said, “I predict that tomorrow Lumifer will not jump off a cliff,” would you do it? Still of course not. Your choice exists regardless of whether there’s some agent out there predicting what you do.
Well, actually, it depends. Descending from flights of imagination down to earth, I sometimes look and sometimes don’t. How do I know there isn’t a car coming? In some cases hearing is enough. It depends.
You are mistaken. If my actions are predetermined, I chose nothing. You may prefer to use the word “choice” within determinism, I prefer not to.
Yes, it does mean that. And, I’m afraid, just you asserting something—even with force—doesn’t make it automatically true.
Of course, but that’s not we are talking about. We are talking about whether choice exists at all.
Okay, it seems like we’re just arguing definitions now. Taboo “choice” and any synonyms. Now that we have done that, I’m going to specify what I mean when I use the word “choice”: the deterministic output of your decision algorithm over your preferences given a certain situation. If there is something in this definition that you feel does not capture the essence of “choice” as it relates to Newcomb’s Problem, please point out exactly where you think this occurs, as well as why it is relevant in the context of Newcomb’s Problem. In the meantime, I’m going to proceed with this definition.
So, in the above quote of mine, replacing “choice” with my definition gives you:
We see that the above quote is trivially true, and I assert that “the deterministic output of your decision algorithm over your preferences given a certain situation” is what matters in Newcomb’s Problem. If you have any disagreements, again, I would ask that you outline exactly what those disagreements are, as opposed to providing qualitative objections that sound pithy but don’t really move the discussion forward. Thank you in advance for your time and understanding.
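As an aside, a toy illustration of that definition (nothing more than a sketch, with made-up preference values): a “choice” in this sense is just the deterministic return value of a function that maps preferences and a situation to an action.

```python
# Toy illustration of "choice" as defined above: the deterministic
# output of a decision algorithm over preferences, given a situation.
# The preference numbers are invented purely for the example.

def decide(preferences: dict, options: list) -> str:
    """Return the available option whose outcome the agent prefers most."""
    return max(options, key=lambda option: preferences[option])

preferences = {"look both ways": 10, "step out blindly": -1000}
options = ["look both ways", "step out blindly"]

print(decide(preferences, options))  # -> "look both ways", every single run
```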
Sure, you can define the word “choice” that way. The problem is, I don’t have that. I do not have a decision algorithm over my preferences that produces some deterministic output given a certain situation. Such a thing does not exist.
You may define some agent for whom your definition of “choice” would be valid. But that’s not me, and not any human I’m familiar with.
What is your basis for arguing that it does not exist?
What makes humans so special as to be exempted from this?
Keep in mind that my goal here is not to perpetuate disagreement or to scold you for being stupid; it’s to resolve whatever differences in reasoning are causing our disagreement. Thus far, your comments have been annoyingly evasive and don’t really help me understand your position better, which has caused me to update toward you not actually having a coherent position on this. Presumably, you think you do have a coherent position, in which case I’d be much gratified if you’d just lay out everything that leads up to your position in one fell swoop rather than forcing myself and others to ask questions repeatedly in hope of clarification. Thank you.
I think it became clear that this debate is pointless the moment proving determinism became a prerequisite for getting anywhere.
I did try a different approach, but that was mostly dodged. I suspect Lumifer wants determinism to be a prerequisite; the freedom to do that slippery debate dance of theirs is so much greater that way.
Either way, yeah. I’d let this die.
Introspection.
What’s your basis for arguing that it does exist?
Tsk, tsk. Such naked privileging of an assertion.
Well, the differences are pretty clear. In simple terms, I think humans have free will and you think they don’t. It’s quite an old debate, at least a couple of millennia old and maybe more.
I am not quite sure why you have difficulty accepting that some people think free will exists. It’s not that unusual a position to hold.
No offense, but this is a textbook example of an answer that sounds pithy but tells me, in a word, nothing. What exactly am I supposed to get out of this? How am I supposed to argue against this? This is a one-word answer that acts as a blackbox, preventing anyone from actually getting anything worthwhile out of it—just like “emergence”. I have asked you several times now to lay out exactly what your disagreement is. Unless you and I have wildly varying definitions of the word “exactly”, you have repeatedly failed to do so. You have displayed no desire to actually elucidate your position to the point where it would actually be arguable. I would characterize your replies to my requests so far as a near-perfect example of logical rudeness. My probability estimate of you actually wanting to go somewhere with this conversation is getting lower and lower...
This is a thinly veiled expression of contempt that again asserts nothing. The flippancy this sort of remark exhibits suggests to me that you are more interested in winning than in truth-seeking. If you think I am characterizing your attitude uncharitably, please feel free to correct me on this point.
Taboo “free will” and try to rephrase your argument without ever using that phrase or any synonymous terms/phrases. (An exception would be if you were trying to refer directly to the phrase, in which case you would put it in quotation marks, e.g. “free will”.) Now then, what were you saying?
You are supposed to get out of this that you’re asking me to prove a negative and I don’t see a way to do this other than say “I’ve looked and found nothing” (aka introspection). How do you expect me to prove that I do NOT have a deterministic algorithm running my mind?
You are not supposed to argue against this. You are supposed to say “Aha, so this a point where we disagree and there doesn’t appear to be a way to prove it one way or another”.
From my point of view you repeatedly refused to understand what I’ve been saying. You spent all your time telling me, but not listening.
Oh, it does. It asserts that you are treating determinism as a natural and default answer and the burden is upon me to prove it wrong. I disagree.
Why? This is the core of my position. If you think I’m confused by words, tell me how I am confused. Is the problem that you don’t understand me? I doubt this.
Are you talking about libertarian free will? The uncaused causer? I would have hoped that LWers wouldn’t believe such absurd things. Perhaps this isn’t the right place for you if you still reject reductionism.
There is such a thing as naturalistic libertarianism.
LOL. Do elaborate, it’s going to be funny :-)
If you’re just going to provide every banal objection that you can without evidence or explanation in order to block discussion from moving forward, you might as well just stop posting.
It’s common to believe that we have the power to “change” the future but not the past. Popular conceptions of time travel such as Back To The Future show future events wavering in and out of existence as people deliberate about important decisions, to the extent of having a polaroid from the future literally change before our eyes.
All of this, of course, is a nonsense in deterministic physics. If any part of the universe is “already” determined, it all is (and by the way quantum “uncertainty” doesn’t change this picture in any interesting way). So there is not much difference between controlling the past and controlling the future, except that we don’t normally get an opportunity to control the past, due to the usual causal structure of the universe.
In other words, the boxes are “already set up and unchangeable” even if you decide before being scanned by Omega. But you still get to decide whether they are unchangeable in a favourable or unfavourable way.
That’s the free-will debate. Does the “solution” to one-box depend on rejection of free will?
Do you believe that objects in the future waver in and out of existence as you deliberate?
(On the free will debate: The common conception of free will is confused. But that doesn’t mean our will isn’t free, or imply fatalism.)
I am aware of the LW (well, EY’s, I guess) position on free will. But here we are discussing the Newcomb’s Problem. We can leave free will to another time. Still, what about my question?
Well, if that’s true—that is, if whether you are the sort of person who one-boxes or two-boxes in Newcomblike problems is a fixed property of Lumifer that you can’t influence in any way—then you’re right that there’s no point to thinking about which choice is best with various different predictors. After all, you can’t make a choice about it, so what difference does it make which choice would be better if you could?
Similarly, in most cases, given a choice between accelerating to the ground at 1G and doing so at .01 G once I’ve fallen off a cliff, I would do better to choose the latter… but once I fall off the cliff, I don’t actually have a choice, so that doesn’t matter at all.
Many people who consider it useful to think about Newcomblike problems, by contrast, believe that there is something they can do about it… that they do indeed make a choice to be a “two-boxer” or a “one-boxer.”
It’s not a fixed property, it’s undetermined. Go ask a random person whether he one-boxes or two-boxes :-)
Correction accepted. Consider me to have repeated the comment with the word “fixed” removed, if you wish. Or not, if you prefer.
I don’t anticipate meeting Omega and his two boxes. Therefore I don’t find pre-committing to a particular decision in this situation useful.
I’m not sure I understand.
Earlier, you seemed to be saying that you’re incapable of making such a choice. Now, you seem to be saying that you don’t find it useful to do so, which seems to suggest… though not assert… that you can.
So, just to clarify: on your view, are you capable of precommitting to one-box or two-box? And if so, what do you mean when you say that you can’t make a choice to be a “one-boxer”—how is that different from precommitting to one-box?
I, personally, have heard of the Newcomb’s Problem, so one can argue that I am capable of pre-committing. However, a tiny minority of the world’s population have heard of that problem and, as far as I know, the default formulation of the Newcomb’s Problem assumes that the subject had no advance warning. Therefore, in the general case, there is no pre-commitment and the choice does not exist.
So, I asked:
You have answered that “one can argue that” you are capable of it.
Which, well, OK, that’s probably true.
One could also argue that you aren’t, I imagine.
So… on your view, are you capable of precommitting?
Because earlier you seemed to be saying that you weren’t able to.
I think you’re now saying that you can (but that other people can’t).
But it’s very hard to tell.
I can’t tell whether you’re just being slippery as a rhetorical strategy, or whether I’ve actually misunderstood you.
That aside: it’s not actually clear to me that precommitting to oneboxing is necessary. The predictor doesn’t require me to precommit to oneboxing, merely to have some set of properties that results in me oneboxing. Precommitment is a simple example of such a property, but hardly the only possible one.
I can precommit, but I don’t want to. Other people (in the general case) cannot precommit because they have no idea about the Newcomb’s Problem.
Sure, but that has nothing to do with my choices.
See, that’s where I disagree. If you choose to one-box, even if that choice is made on a whim right before you’re required to select a box/boxes, Omega can predict that choice with accuracy. This isn’t backward causation; it’s simply what happens when you have a very good predictor. The problem with causal decision theory is that it neglects these sorts of acausal logical connections, instead electing to only keep track of causal connections. If Omega can predict you with high enough accuracy, he can predict choices that you would make given certain information.

If you take a random passerby and present them with a formulation of Newcomb’s Problem, Omega can analyze that passerby’s disposition and predict in advance how that passerby’s disposition will affect his/her reaction to that particular formulation of Newcomb’s problem, including whether he/she will two-box or one-box. Conscious precommitment is not required; the only requirement is that you make a choice. If you or any other person chooses to one-box, regardless of whether they’ve previously heard of Newcomb’s Problem or made a precommitment, Omega will predict that decision with whatever accuracy we specify. Then the only questions are “How high of an accuracy do we need?”, followed by “Can humans reach this desired level of accuracy?” And while I’m hesitant to provide an absolute threshold for the first question, I do not hesitate at all to answer the second question with, “Yes, absolutely.” Thus we see that Newcomb-like situations can and do pop up in real life, with merely human predictors.
If there are any particulars you disagree with in the above explanation, please let me know.
Sure, I agree, Omega can do that.
However, when I get to move, when I have the opportunity to make a choice, Omega is already done with his prediction. Regardless of what his prediction was, the optimal choice for me after Step 1 is to two-box.
My choice cannot change what’s in the boxes—only Omega can determine what’s in the boxes and I have no choice with respect to his prediction.
Well, if you reason that way, you will end up two-boxing. And, of course, Omega will know that you will end up two-boxing. Therefore, he will put nothing in Box B. If, on the other hand, you had chosen to one-box instead, Omega would have known that, too. And he would have put $1000000 in Box B. If you say, “Oh, the contents of the boxes are already fixed, so I’m gonna two-box!”, there is not going to be anything in Box B. It doesn’t matter what reasoning you use to justify two-boxing, or how elaborate your argument is; if you end up two-boxing, you are going to get $1000 with probability (Omega’s-predictive-power)%. Sure, you can say, “The boxes are already filled,” but guess what? If you do that, you’re not going to get any money. (Well, I mean, you’ll get $1000, but you could have gotten $1000000.) Remember, the goal of a rationalist is to win. If you want to win, you will one-box. Period.
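For what it’s worth, here is a short sketch of the arithmetic behind that, using the standard payoffs of $1,000 and $1,000,000 and treating Omega’s accuracy as a variable; the break-even formula is simply algebra on those payoffs, not anything specified in the thread.

```python
# Expected value of each strategy as a function of Omega's accuracy p,
# with the standard payoffs: $1,000 in Box A, $1,000,000 in Box B if filled.

BOX_A, BOX_B = 1_000, 1_000_000

def ev_one_box(p: float) -> float:
    # Predicted correctly with probability p, in which case Box B is full.
    return p * BOX_B

def ev_two_box(p: float) -> float:
    # Predicted correctly with probability p, in which case Box B is empty.
    return BOX_A + (1 - p) * BOX_B

# Break-even accuracy: p * B = A + (1 - p) * B  =>  p = (A + B) / (2 * B)
break_even = (BOX_A + BOX_B) / (2 * BOX_B)
print(f"one-boxing wins for any accuracy above {break_even:.2%}")

for p in (0.51, 0.80, 0.99):
    print(p, ev_one_box(p), ev_two_box(p))
```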
Notice the tense you are using: “had chosen”. When did that choice happen? (for a standard participant)
You chose to two-box in this hypothetical Newcomb’s Problem when you said earlier in this thread that you would two-box. Fortunately, since this is a hypothetical, you don’t actually gain or lose any utility from answering as you did, but had this been a real-life Newcomb-like situation, you would have. If (I’m actually tempted to say “when”, but that discussion can be held another time) you ever encounter a real-life Newcomb-like situation, I strongly recommend you one-box (or whatever the equivalent of one-boxing is in that situation).
I don’t believe real-life Newcomb situations exist or will exist in my future.
I also think that the local usage of “Newcomb-like” is misleading in that it is used to refer to situations which don’t have much to do with the classic Newcomb’s Problem.
Your recommendation was considered and rejected :-)
It is my understanding that Newcomb-like situations arise whenever you deal with agents who possess predictive capabilities greater than chance. It appears, however, that you do not agree with this statement. If it’s not too inconvenient, could you explain why?
Can you define what is a “Newcomb-like” situation and how can I distinguish such from a non-Newcomb-like one?
You have elsewhere agreed that you (though not everyone) have the ability to make choices that affect Omega’s prediction (including, but not limited to, the choice of whether or not to precommit to one-boxing).
That seems incompatible with your claim that all of your relevant choices are made after Omega’s prediction.
Have you changed your mind? Have I misunderstood you? Are you making inconsistent claims in different branches of this conversation? Do you not see an inconsistency? Other?
Here when I say “I” I mean “a standard participant in the classic Newcomb’s Problem”. A standard participant has no advance warning.
Ah. OK. And just to be clear: you believe that advance warning is necessary in order to decide whether to one-box or two-box… it simply isn’t possible, in the absence of advance warning, to make that choice; rather, in the absence of advance warning humans deterministically two-box. Have I understood that correctly?
Correct.
Nope. I think two-boxing is the right thing to do, but humans are not deterministic; they can (and do) do all kinds of stuff. If you run an empirical test I think it’s very likely that some people will two-box and some people will one-box.
Gotcha: they don’t have a choice in which they do, on your account, but they might do one or the other. Correction accepted.
Incidentally, for the folks downvoting Lumifer here, I’m curious as to your reasons. I’ve found many of their earlier comments annoyingly evasive, but now they’re actually answering questions clearly. I disagree with those answers, but that’s another question altogether.
There are a lot of behaviorists here. If someone doesn’t see the light, apply electric prods until she does X-)
It would greatly surprise me if anyone here believed that downvoting you will influence your behavior in any positive way.
You think it’s just mood affiliation, on a rationalist forum? INCONCEIVABLE! :-D
I’m curious: do you actually believe I think that, or are you saying it for some other reason?
Either way: why?
A significant part of the time I operate in the ha-ha only serious mode :-)
The grandparent post is a reference to a quote from The Princess Bride.
Yes, you do, and I understand the advantages of that mode in terms of being able to say stuff without being held accountable for it.
I find it annoying.
That said, you are of course under no obligation to answer any of my questions.
In which way am I not accountable? I am here, answering questions, not deleting my posts.
Sure, I often prefer to point to something rather than plop down a full specification. I am also rather fond of irony and sarcasm. But that’s not exactly the same thing as avoiding accountability, is it?
If you want highly specific answers, ask highly specific questions. If you feel there is ambiguity in the subject, resolve it in the question.
OK. Thanks for clarifying your position.
If you said earlier in this thread that you would two-box, you are a two-boxer. If you said earlier in this thread that you would one-box, you are a one-boxer. If Omega correctly predicts your status as a one-boxer/two-boxer, he will fill Box B with the appropriate amount. Assuming that Omega is a good predictor, his prediction is contingent on your disposition as a one-boxer or a two-boxer. This means you can influence Omega’s prediction (and thus the contents of the boxes) simply by choosing to be a one-boxer. If Omega is a good-enough predictor, he will even be able to predict future changes in your state of mind. Therefore, the decision to one-box can and will affect Omega’s prediction, even if said decision is made AFTER Omega’s prediction.
This is the essence of being a reflectively consistent agent, as opposed to a reflectively inconsistent agent. For an example of an agent that is reflectively inconsistent, see causal decision theory. Let me know if you still have any qualms with this explanation.
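If a concrete model helps, here is a small Monte Carlo sketch of that point (accuracy, payoffs, and trial count are arbitrary choices of mine): the predictor looks only at the player’s disposition, fills the boxes, and the player then acts on that same disposition, so the disposition fixed by the player’s decision is what ends up determining the contents.

```python
import random

# Monte Carlo sketch: a predictor with accuracy ACC reads the player's
# disposition ("one-box" or "two-box") before the boxes are filled;
# the player's later action simply follows that disposition.
# ACC, payoffs, and TRIALS are arbitrary illustration values.

ACC, TRIALS = 0.99, 100_000
BOX_A, BOX_B = 1_000, 1_000_000

def average_payout(disposition: str) -> float:
    total = 0
    other = "one-box" if disposition == "two-box" else "two-box"
    for _ in range(TRIALS):
        predicted = disposition if random.random() < ACC else other
        box_b = BOX_B if predicted == "one-box" else 0
        total += box_b if disposition == "one-box" else box_b + BOX_A
    return total / TRIALS

print("one-boxing disposition averages about", round(average_payout("one-box")))
print("two-boxing disposition averages about", round(average_payout("two-box")))
```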
Oh, I can’t change my mind? I do that on a regular basis, you know...
This implies that I am aware that I’ll face the Newcomb’s problem.
Let’s do the Newcomb’s Problem with a random passer-by picked from the street—he has no idea what’s going to happen to him and has never heard of Omega or the Newcomb’s problem before. Omega has to make a prediction and fill the boxes before that passer-by gets any hint that something is going to happen.
So, Step 1 happens, the boxes are set up, and the whole game is explained to our passer-by. What should he do? He never chose to be a one-boxer or a two-boxer because he had no idea such things existed. He can only make a choice now, and the boxes are already set and immutable. Why should he one-box?
It seems unlikely to me that you would change your mind about being a one-boxer/two-boxer over the course of a single thread. Nevertheless, if you did so, I apologize for making presuppositions.
As I wrote in my earlier comment:
If our hypothetical passerby chooses to one-box, then to Omega, he is a one-boxer. If he chooses to two-box, then to Omega, he is a two-boxer. There’s no “not choosing”, because if you make a choice about what to do, you are choosing.
The only problem is that you have causality going back in time. At the time of Omega’s decision the passer-by’s state with respect to one- or two-boxing is null, undetermined, does not exist. Omega can scan his brain or whatever and make his prediction, but the passer-by is not bound by that prediction and has not (yet) made any decisions.
The first chance our passer-by gets to make a decision is after the boxes are fixed. His decision (as opposed to his personality, preferences, goals, etc.) cannot affect Omega’s prediction because causality can’t go backwards in time. So at this point, after step 2, the only time he can make a decision, he should two-box.
As far as I’m aware, what you’re saying is basically the same thing as what causal decision theory says. I hate to pass the buck, but So8res has written a very good post on this already; anything I could say here has already been said by him, and better. If you’ve read it already, then I apologize; if not, I’d say give it a skim and see what you think of it.
So8res’ post points out that
It seems I’m in good company :-)
Unless you like money and can multiply, in which case you one-box and end up (almost but not quite certainly) richer.
Wat iz zat “multiply” thang u tok abut?