Newcomb’s Problem: A problem for Causal Decision Theories
This is part of a sequence titled, “Introduction to decision theory”
The previous post is “An introduction to decision theory”
In the previous post I introduced evidential and causal decision theories. The principal question that needs resolving with regard to these is whether using these decision theories leads to rational decisions. The next two posts will show that both causal and evidential decision theories fail to do so and will try to set the scene so that it’s clear why so much focus is given on Less Wrong to developing new decision theories.
Newcomb’s Problem
Newcomb’s Problem asks us to imagine the following situation:
Omega, an unquestionably honest, all-knowing agent with perfect powers of prediction, appears, along with two boxes. Omega tells you that it has placed a certain sum of money into each of the boxes. It has already placed the money and will not now change the amount. You are then asked whether you want to take just the money that is in the left hand box or whether you want to take the money in both boxes.
However, here’s where it becomes complicated. Using its perfect powers of prediction, Omega predicted whether you would take just the left box (called “one boxing”) or whether you would take both boxes (called “two boxing”). Either way, Omega put $1000 in the right hand box but filled the left hand box as follows:
If he predicted you would take only the left hand box, he put $1 000 000 in the left hand box.
If he predicted you would take both boxes, he put $0 in the left hand box.
Should you take just the left hand box or should you take both boxes?
An answer to Newcomb’s Problem
One argument goes as follows: By the time you are asked to choose what to do, the money is already in the boxes. Whatever decision you make, it won’t change what’s in the boxes. So the boxes can be in one of two states:
Left box, $0. Right box, $1000.
Left box, $1 000 000. Right box, $1000.
Whichever state the boxes are in, you get more money if you take both boxes than if you take one. In game theoretic terms, the strategy of taking both boxes strictly dominates the strategy of taking only one box. You can never lose by choosing both boxes.
The only problem is, you do lose. If you take two boxes then they are in state 1 and you only get $1000. If you only took the left box you would get $1 000 000.
To many people, this may be enough to make it obvious that the rational decision is to take only the left box. If so, you might want to skip the next paragraph.
Taking only the left box didn’t seem rational to me for a long time. It seemed that the reasoning described above to justify taking both boxes was so powerful that the only rational decision was to take both boxes. I therefore saw Newcomb’s Problem as proof that it was sometimes beneficial to be irrational. I changed my mind when I realised that I’d been asking the wrong question. I had been asking which decision would give the best payoff at the time and saying it was rational to make that decision. Instead, I should have been asking which decision theory would lead to the greatest payoff. From that perspective, it is rational to use a decision theory that suggests you only take the left box because that is the decision theory that leads to the highest payoff. Taking only the left box leads to a higher payoff, and it’s also a rational decision if you ask, “Which decision theory is it rational for me to use?” and then make your decision according to the theory that you have concluded it is rational to follow.
What follows will presume that a good decision theory should one box on Newcomb’s problem.
Causal Decision Theory and Newcomb’s Problem
Remember that decision theory tells us to calculate the expected utility of an action by summing the utility of each possible outcome of that action multiplied by its probability. In Causal Decision Theory, this probability is defined causally (something that we haven’t formalised and won’t formalise in this introductory sequence, but which we have at least some grasp of). So Causal Decision Theory will act as if the probability that the boxes are in state 1 or state 2 above is not influenced by the decision to one box or two box (so let’s say that the probability that the boxes are in state 1 is P and the probability that they’re in state 2 is Q, regardless of your decision).
So if you undertake the action of choosing only the left box your expected utility will be equal to: (0 x P) + (1 000 000 x Q) = 1 000 000 x Q
And if you choose both boxes, the expected utility will be equal to: (1000 x P) + (1 001 000 x Q).
Whatever values P and Q take, the second expression exceeds the first by 1000 x (P + Q), so two boxing always comes out ahead by exactly $1000. So Causal Decision Theory will lead to the decision to take both boxes and hence, if you accept that you should one box on Newcomb’s Problem, Causal Decision Theory is flawed.
Evidential Decision Theory and Newcomb’s Problem
Evidential Decision Theory, on the other hand, will take your decision to one box as evidence that Omega put the boxes in state 2, to give an expected utility of (1 x 1 000 000) + (0 x 0) = 1 000 000.
It will similarly take your decision to take both boxes as evidence that Omega put the boxes into state 1, to give an expected utility of (0 x (1 000 000 + 1000)) + (1 x (0 + 1000)) = 1000.
As such, Evidential Decision Theory will suggest that you one box and hence it passes the test posed by Newcomb’s Problem. We will look at a more challenging scenario for Evidential Decision Theory in the next post. For now, we’re partway toward seeing why there’s still a need for a decision theory that makes the right decision across a wide range of situations.
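For concreteness, here is a minimal sketch of the two calculations above in Python (the loop over sample values of P is just an illustration; nothing depends on the particular probabilities):

```python
# Payoffs as in the post: $1000 always in the right box, $1,000,000 in the
# left box iff Omega predicted one boxing.

def cdt_expected_utility(action, p_state1, p_state2):
    """CDT: the state probabilities do not depend on the action taken."""
    if action == "one-box":
        return 0 * p_state1 + 1_000_000 * p_state2
    else:  # "two-box"
        return 1_000 * p_state1 + 1_001_000 * p_state2

def edt_expected_utility(action):
    """EDT with a perfect predictor: the action is perfect evidence of the state."""
    if action == "one-box":
        return 1_000_000   # P(state 2 | one-box) = 1
    else:
        return 1_000       # P(state 1 | two-box) = 1

# Whatever P and Q are, two boxing beats one boxing by $1000 under CDT:
for p in (0.1, 0.5, 0.9):
    q = 1 - p
    print(cdt_expected_utility("two-box", p, q) - cdt_expected_utility("one-box", p, q))
# -> roughly 1000 each time (exactly $1000, up to floating point),
# so CDT two boxes, while EDT one boxes (1,000,000 > 1,000).
```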
Appendix 1: Important notes
While the consensus on Less Wrong is that one boxing on Newcomb’s Problem is the rational decision, my understanding is that this opinion is not necessarily held uniformly amongst philosophers (see, for example, the Stanford Encyclopedia of Philosophy’s article on Causal Decision Theory). I’d welcome corrections on this if I’m wrong but otherwise it does seem important to acknowledge where the level of consensus differs on Less Wrong compared to the broader community.
For more details on this, see the results of the PhilPapers Survey where 61% of respondents who specialised in decision theory chose to two box and only 26% chose to one box (the rest were uncertain). Thanks to Unnamed for the link.
If Newcomb’s Problem doesn’t seem realistic enough to be worth considering then read the responses to this comment.
Appendix 2: Existing posts on Newcomb’s Problem
Newcomb’s Problem has been widely discussed on Less Wrong, generally by people with more knowledge on the subject than me (this post is included as part of the sequence because I want to make sure no-one is left behind and because it is framed in a slightly different way). Good previous posts include:
A post by Eliezer introducing the problem and discussing the issue of whether one boxing is irrational.
A link to Marion Ledwig’s detailed thesis on the issue.
An exploration of the links between Newcomb’s Problem and the prisoner’s dilemma.
A post about formalising Newcomb’s Problem.
And a Less Wrong wiki article on the problem with further links.
That’s correct. See, for instance, the PhilPapers Survey of 931 philosophy professors, which found that only 21% favored one boxing vs. 31% who favored two boxing; 43% said other (mostly undecided or insufficiently familiar with the issue). Among the 31 philosophers who specialize in decision theory, there was a big shift from other (down to 13%) to two boxing (up to 61%), and still only 26% favored one boxing.
I’m not sure I actually believe this survey. Sure, these people claim they’d two box in academic papers, and in surveys—that’s easy enough to do—but would any of them actually be committed enough to two-boxing to turn down $1 million if they ever found themselves in the actual set-up?
My feelings are the opposite. I’m committed to one-boxing (just in case Omega is scanning my brain right now), but I’m not at all sure that I’d stick to that commitment with a box of $1000 sitting right there in front of me free for the taking. (Don’t listen, Omega, move on, nothing to see here).
An issue that often occurs to me when discussing these questions: I one-box, cooperate in one-shot PDs, and pay the guy who gave me a lift out of the desert. I’ve no idea what decision theory I’m using when I decide to do these things, but I still know that I’d do them. I’m pretty sure that’s what most other people would do as well.
Does anyone actually know how Human Decision Theory works? I know there are examples of problems where HDT fails miserably and CDT comes out better, but is there a coherent explanation for how HDT manages to get all of these problems right? Has anyone attempted to develop a decision theory which successfully solves these sorts of problems by mimicking the way in which people successfully solve them?
I don’t think most people one-box. Maybe most LW readers one-box.
I have two real boxes, labelled with Newcomb’s problem and using 1 and 4 quarters in place of the $10k and $1M. I have shown them to people at Less Wrong meetups, and also to various friends of mine, a total of about 20 people.
Almost everyone I’ve tried it on has one-boxed. Even though I left out the part in the description about being a really accurate predictor, and pre-seeded the boxes before I even knew who would be the one choosing. Maybe it would be different with $10k instead of $0.25. Maybe my friends are unusual and a different demographic would two-box. Maybe it’s due to a quirk of how I present them. But unless someone presents contrary evidence, I have to conclude that most people are one-boxers.
What?!? You offer people two boxes with essentially random amounts of money in them, and they choose to take one of the boxes instead of both? And these people are otherwise completely sane?
Could you maybe give us details of how exactly you present the problem? I can’t imagine any presentation that would make anyone even slightly tempted to one-box this variant. (Maybe if I knew I’d get to play again one day...)
That seems bizarre to me too. But if Jimrandomh is filling his boxes on the basis of what most people would do, and most people do one-box, then perhaps they are just behaving as rational, highly correlated, timeless decisionmakers.
A signalling explanation might explain this behavior: people would rather be seen as having gotten the problem correct, or signal non-greediness, than get an extra $0.25. As evidence for this conclusion, some people turn down the $1.00 in box one.
No one’s given the real correct solution, which is “inspect the boxes more thoroughly”. One of them has an extra label on the bottom, offering an extra $1.00 for finding it if you haven’t opened any boxes yet, which I’ve never had to pay out on. The moral is supposed to be that theory is hard to transfer into the real world and to question assumptions.
You let people inspect the boxes? Wouldn’t they be distinguishable by weight?
Weird. I two-box on that variant.
Reminds me of a story, set in a lazy Mark Twain river town. Two friends walking down the street. First says to second, “See that kid? He is really stupid.” Second asks, “Why do you say that?” First answers, “Watch”. Approaches kid. Holds out nickel in one hand and dime in the other. Asks kid which he prefers. “I’ll take the nickel. It’s bigger”. Man hands nickel to kid with smirk, and the two friends continue on.
Later the second man comes back and attempts to instruct the kid. “A dime is worth twice the value, that is, it buys more candy”, says he, “even though the nickel looks bigger.” The kid gives the man a pitying look. “Ok, if you say so. But I’ve made seven nickels so far this month. How many dimes have you made?”
Which brings me to my real point—empirical research, I’m sure you have seen it, in which player 1 is asked to specify a split of $10 between himself and player 2. Player 2 then chooses to accept or reject. If he rejects, neither player gets anything. As I recall, when greedy player 1 specifies more than about 70% for himself, player 2 frequently rejects even though he is costing himself money. This can only be understood in classical “rational agent” game theory by postulating that player 2 does not believe researcher claims that the game is a one-shot.
What is the point? Well, perhaps people who have read about Newcomb problems are assuming (like most people in the research) that, somehow or other, greed will be punished.
Punishing unfair behavior even when it costs to do so is called altruistic punishment, and this particular experiment is called the Ultimatum Game.
Is it plausible that evolution would gradually push those 70% down to 30% or even lower, given enough time? There may not yet have been enough time for a strong enough group selection in evolution to create such an effect, but sooner or later it should happen, shouldn’t it? I’m thinking a species with such a great degree of selflessness would be more likely to survive than present humanity is, because a larger percentage of them would cooperate on existential risk reduction than is the case in present humanity. Yet, 10-30% is still not 0%, so even with 10% there would still be enough selfishness to make sure they wouldn’t end up refusing each other’s gifts until they all starve to death or something.
Can group selection of genes for different psychological constitution in humans already explain why player 1 takes only 70% and not, say, at least 90%, on average, in the game you describe?
What do chimps do? Does a chimp player 1 take more or less than 70%?
First of all, from the standpoint of the good of the group, I see no reason why player 1 shouldn’t keep 100% of the money. After all, it is not as if player 2 were starving, and surely the good of player 1 is just as important to the good of the group as is the good of player 2. There is almost no reason for sharing from a standpoint of either Bentham-style utilitarianism or good-of-the-group.
However, there is a reason for sharing when you realize that player 2 is quite reasonably selfish, and has the power to make your life miserable. So, go ahead and give the jerk what he asks for. It is certainly to your own selfish advantage to do so. As long as he doesn’t get too greedy.
I’d like to see this done with a really good mentalist.
If I met someone in real life who was doing this trick (at least before I started spilling my opinions to the universe through my comments to this blog), I would strongly suspect that you were doing exactly this. And then I would definitely pick both boxes. (Well, first I’d try to figure out if you’re likely to offer me any more games, and I’d pick two boxes if I was fairly confident that you would not.) And I would get all of the money, since you would have predicted that I would pick only one box (assuming that you really seed them based on your honest best prediction).
On the other hand, if the situation is not presented as a game (even when I still don’t expect any iteration), I pretty consistently cooperate on all of the standard examples (prisoner’s dilemma, etc). But since feeling like a moral and cooperative person (except when playing games, of course) has high utility for me, I’m not really playing prisoner’s dilemma (etc) after all, so never mind.
This is interesting. I suspect this is a selection effect, but if it is true that there is a heavy bias in favor of one boxing among a more representative sample in the actual Newcomb’s problem, then a predictor that always predicts one boxing could be surprisingly accurate.
I read somewhere that about 70% of people one-box. You might be thinking of most philosophers or something like that.
Unilaterally cooperating in true one-shot PDs is not right. If the other player’s decision is not correlated with yours, you should defect.
Thanks for a great post Adam, I’m looking forward to the rest of the series.
This might be missing the point, but I just can’t get past it. How does a rational agent come to believe that the being they’re facing is “an unquestionably honest, all knowing agent with perfect powers of prediction”?
I have the suspicion that a lot of the bizarreness of this problem comes out of transporting our agent into an epistemologically unattainable state.
Is there a way to phrase a problem of this type in a way that does not require such a state?
It’s not perfect, per se, but try this:
There’s a fellow named James Omega who (with the funding of certain powerful philosophy departments) travels around the country offering random individuals the chance to participate in Newcomb’s problem, with James as Omega. Rather than scanning your brain with his magic powers, he spends a day observing you in your daily life, and uses this info to make his decision. Here’s the catch: he’s done this 300 times, and never once mispredicted. He’s gone up against philosophers and lay-people, people that knew they were being observed and people that didn’t, but it makes no difference: he just has an intuition that good. When it comes time to do the experiment, it’s set up in such a way that you can be totally sure (and other very prestigious parties have verified) that the amounts in the boxes do not change after your decision.
So when you’re selected, what do you do? Nothing quite supernatural is going on, we just have the James fellow with an amazing track record, and you with no particular reason to believe that you’ll be his first failure. Even if he is just human, isn’t it rational to assume the ridiculously likely thing (301/302 chance according to Laplace’s Law) that he’ll guess you correctly? Even if we adjust for the possibility of error, the payoff matrix is still so lopsided that it seems crazy to two-box.
See if that helps, and of course everyone else is free to offer improvements if I’ve missed something. You know, help get this Least Convenient Possible World going.
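As a rough check of the Laplace’s Law figure above, and of how lopsided the payoff comparison stays even allowing for that sliver of error, here is a sketch assuming the post’s $1000 and $1 000 000 amounts:

```python
# Laplace's rule of succession: after s successes in n trials, the estimated
# probability of success on the next trial is (s + 1) / (n + 2).
successes, trials = 300, 300
p_correct = (successes + 1) / (trials + 2)   # 301/302, roughly 0.9967, as claimed above

# Even with that small allowance for error, the expected payoffs are lopsided:
one_box = p_correct * 1_000_000
two_box = p_correct * 1_000 + (1 - p_correct) * 1_001_000
print(p_correct, one_box, two_box)   # roughly 0.997, 996,700 vs 4,300: one-boxing wins easily
```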
Now I want to read a series of stories starring James Omega in miscellaneous interesting situations. The kind of ability implied by accuracy at Newcomb’s Dilemma would seem to imply capability in other situations as well. (If nothing else, he would kill at rock-paper-scissors.)
Let’s make things clearer by asking the meta-question: is the predictor’s implementation, and the process by which we learn of it, relevant to the problem? Let’s unpack “relevant”: should the answer to Newcomb’s Problem depend on these extraneous details about the predictor? And let’s unpack “should”: if decision theory A tells you to one-box in approximately-Newcomb-like scenarios without requiring further information, and decision theory B says the problem is “underspecified” and the answer is “unstable” and you can’t pin it down without learning more about the real-world situation… which decision theory do you like more?
Decision theory A is by far preferable to me.
Of course, that’s assuming that by Newcomb-like scenarios you only include those where one-boxing is actually statistically correlated with greater wealth once all other factors are canceled out.
If Decision Theory A’s definition of newcomb-like included a scenario where the person was doing well enough to make one-boxing appear to be the winning move, but was actually basing her decisions on hair-colour, then I would be more tempted by Decision Theory B.
IOW: whichever one wins for me :p
Newcomb’s Problem still holds in much more realistic situations. So say someone who knows you really, really well comes up to you and makes the same offer. Imagine you don’t mind taking their money and you reckon they know you well enough that they’re 80% likely to be correct in their bet. One boxing is still the right decision because you have the following gain from one boxing:
(.8 x 1 000 000) + (.2 x 0) = 800 000
and for two boxing:
(.8 x 1000) + (.2 x 1 001 000) = 800 + 200 200 = 201 000
But Causal Decision Theory will still undertake the same reasoning because your decision still doesn’t have a causal influence on whether the boxes are in state 1 or 2. So Causal Decision Theory will still two box.
So Newcomb’s Problem still holds in more realistic situations.
Is that the sort of thing you were looking for or have I missed the point?
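A small sketch generalising the calculation above to an arbitrary predictor accuracy p (same amounts as the post; the break-even point is my own arithmetic):

```python
# Expected payoff as a function of predictor accuracy p.
def one_box(p):
    return p * 1_000_000                        # predicted correctly with prob. p -> $1M

def two_box(p):
    return p * 1_000 + (1 - p) * 1_001_000      # predicted correctly -> only $1,000

for p in (0.5, 0.5005, 0.6, 0.8, 0.99):
    print(p, one_box(p), two_box(p))
# One-boxing overtakes two-boxing as soon as p > 1_001_000 / 2_000_000 = 0.5005,
# so even a barely-better-than-chance predictor is enough. At p = 0.8 the values
# are roughly 800,000 vs 201,000, matching the figures above.
```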
Even if you don’t believe such a situation can exist, you can make inferences about how you should act in such a case, based on how you should act in realistic cases.
Like AdamBell said, you can consider a more realistic scenario where someone simply has a good chance of guessing what you do.
Then take it a step further: write your decision theory as a function of how accurate the guesser is. Presumably, for the “high but not 100%” accuracy cases, you’ll want to one-box. So, in order to have a decision theory that doesn’t have some sort of discontinuity, you will have to set it so that it would imply that on a 100% guesser-accuracy case, you should one-box as well.
In short, it’s another case of Belief in the Implied Invisible, or implied optimal, as is the case here. While you may not be in a position to test claim X directly, it falls out as an implication of the best theories, which are directly testable.
(I should probably write an article justifying the importance of Newcomb’s problem and why it has real implications for our lives—there are many other ways it’s important, such as in predicting the output of a process.)
If you want a way of phrasing this problem which involves the agent being in an attainable state, this may be of some small interest, Alexandros. A few years back I wrote an article discussing a situation with some similarities to the one in Newcomb’s problem and with an attainable-state agent. While the article doesn’t prove anything really profound in philosophy, it might give a useful context. It is here: http://www.paul-almond.com/GameTheoryWithYourself.htm.
I believe you used to post here as PaulUK, and joined in for this discussion of your website’s articles.
SilasBarta, yes. I decided to change to this username as it is more obvious who I am. I generally use my real name in online discussions of this type: I have it on my website anyway. I don’t envisage using the PaulUK name again.
Others have given good answers; here’s another.
There is, and it is useful to look at such phrasings to allay those suspicions. However, once we have looked at the issue enough to separate the practical implications of imperfect knowledge from the core problem, the simple version becomes more useful. It turns out that the trickiest part becomes unavoidable once we clear out the distractions!
And where, pray tell, might I look?
Asking folks to hypothetically accept the unbelievable does not, IMHO, “clear out distractions”.
When I was getting my head around the subject I made them up myself. I considered what the problem would look like if I took out the ‘absolute confidence’ stuff. For example—forget Omega, replace him with Patrick Jane. Say Jane has played this game 1,000 times before with other people and only got it wrong (and/or lied) 7 times.
I assume you can at least consider TV show entertainment level counterfactuals for the purpose of solving these problems. Analysing the behavior of fictional characters in TV shows is a legitimate use for decision theory.
That would have made things difficult in high school science. Most example problems do exactly that. I distinctly remember considering planes and pulleys that were frictionless.* The only difference here is that the problem is harder (on our intuitions, if nothing else.)
* Did anyone else find it amusing when asked to consider frictionless ropes that were clearly fastened to the 200 kg weights with knots?
Please link to previous discussions of Newcomb’s Problem on LW. They contain many valuable insights that new readers will otherwise have to regenerate (possibly poorly).
Okay. Doing so now.
Could you fix the spelling of Newcomb while you’re at it? Thanks!
And done.
A kind of funny way in which something like this might (just about) happen in reality occurs to me: Possible time delay in human awareness of decision making. Suppose when you make a conscious decision, your brain starts to become committed to that decision before you become aware of it, so if you suddenly decide to press a button then your brain was going through the process of committing you to pressing it before you actually knew you were going to press it. That would mean that every time you took a conscious decision to act, based on some rational grounds, you should really have been wanting to be the person who had been predisposed to act in that way a short time ago, when the neural machinery was pushing you towards that decision. I’m not saying this resolves any big issues, but maybe it can be amusingly uncomfortable for a few people—especially given some (admittedly controversial) experiments. In fact, with some brainwave monitoring equipment, a clever experiment design, and a very short experiment duration, you might even be able to set up something slightly resembling Newcomb’s paradox!
I have a description here of a practical demonstration of Newcomb’s paradox that might just be possible, with current or near-future technology. It would rely, simply, on the brain being more predictable over a short span of time. I would be interested to see what people think about the feasibility.
A test subject sits at a desk. On the desk are two buttons. One button, “O”, corresponds to opening one box. The other button, “B”, corresponds to opening both boxes. There is a computer with a display screen. The boxes are going to be computer simulated: a program in the computer has a variable for the amount of money in each box.
This is how an experimental run proceeds.
The subject sits at the desk for some random amount of time, during which nothing happens.
A “Decision Imminent” message appears on the computer screen. This is to warn the subject that his/her decision is about to be demanded imminently.
A short time after (maybe a second or two, or a few seconds), the computer program decides how much money will go in each box, and it sets the variables accordingly, without showing the user. As soon as that is done, a “Select a box NOW” message appears on the computer screen. The subject now has a (very) limited amount of time to press either button “O” or “B” to select one or both boxes. The subject will have to press one of the buttons almost immediately before the offer is withdrawn.
The subject is then shown the amount of money that was in each box.
Now, here is the catch (and everyone here will have guessed it).
The subject is also wired up to brain monitoring equipment, which is connected to the computer. When the “Decision imminent” message appeared, the computer started to examine the subject’s brainwaves, to try to see the decision to press being formed. Just before the “Select a box NOW” message appeared, it used this information to load the simulated boxes according to the rules of the Newcomb’s paradox game being discussed here.
I have no idea what level of accuracy could be achieved now, but it may be that some people could be made to have a worrying experience.
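For concreteness, here is a rough sketch of the experimental loop described above. The read_eeg, predict_button and get_button_press functions are hypothetical placeholders for the brain-monitoring hardware, classifier and button interface, not real APIs, and the dollar amounts are borrowed from the post:

```python
import random
import time

# Hypothetical placeholder stand-ins for the hardware and classifier.
def read_eeg(seconds):
    return [random.random() for _ in range(int(seconds * 256))]   # fake samples

def predict_button(eeg_sample):
    return random.choice(["O", "B"])    # a real classifier would go here

def get_button_press(timeout):
    # The timeout isn't enforced in this sketch.
    return input("Press O (one box) or B (both boxes): ").strip().upper()

def run_trial():
    time.sleep(random.uniform(2, 10))             # 1. random idle period
    print("Decision Imminent")                    # 2. warn the subject
    eeg = read_eeg(seconds=2)                     # 3. monitor while the decision forms
    prediction = predict_button(eeg)
    left_box = 1_000_000 if prediction == "O" else 0   # load the simulated boxes
    right_box = 1_000
    print("Select a box NOW")                     # 4. demand an immediate choice
    choice = get_button_press(timeout=1.0)
    payout = left_box if choice == "O" else left_box + right_box
    print(f"Left box: ${left_box}. Right box: ${right_box}. You won ${payout}.")
    return prediction, choice, payout
```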
I’m considering continuing this sequence on an external blog. There have been some positive responses to these posts but there are also a lot of people who plainly consider that the quality of the posts isn’t up to scratch. Moving them to an external site would let people follow them if they wanted to but would stop me from bombarding LW with another five or six posts.
Opinions?
I think this warrants being on Less Wrong. One of Eliezer’s best pieces was his basic explanation of Bayes’ Theorem, and there are plenty of people who’re confused about decision theory. This post got 111 comments, and it’s hard to see you doing worse than the recent SIAI flamewar.
I think that you should finish this sequence on lesswrong.
It is less technical and easier to understand than other posts on Decision Theory, and would therefore be valuable for newcomers.
I don’t know—I’m not sure if we want to end up with dozens and dozens of post re-explaining things like Newcomb’s problem. Decision Theory was already explained here by Eliezer, then by Anna Salamon … maybe in a year some other new poster is going to read up on decision theory and decide to post a sequence about it on Less Wrong.
On the other hand, your Decision Theory posts aren’t really low-quality by LW standards. They’re just covering ground that has already been covered before. I would much prefer posts that quickly gloss over the familiar stuff (linking to the wiki or old sequences as needed), and quickly get to the new stuff.
I would like to direct this comment to the attention of all the people who wondered why I was apologetic about posting elementary material.
Does decision theory still matter in a world where there’s an agent who’s already predicted your choices? Once Omega exists, “decision” is the wrong word—it’s really a discovery mechanism for your actions.
That’s the normal meaning of “decision” anyway, unless you believe in acausal free will magic.
Decisions you make now are informative (in the information-theoretic sense) about your past.
Decisions you make now are informative about the past.
Eliezer has a post on an isomorphic topic:
Timeless Control
You might also like Gary Drescher’s treatment of choice in Good and Real.
I agree Dagon—and I actually specifically discussed this issue in the article I referenced in the comment I posted just before this one. Part of what I said was: “There may be one way that we could deal with this issue, and that would be to use different language to describe choices. Conventionally, if I have just picked up a glass we would say that I chose to pick it up. This whole idea of ‘choosing’ can cause us cognitive difficulties. Maybe it would be better to consider my ‘choice’ to pick up the glass as really ‘finding out’ that I was predisposed to pick it up.” I also agree with what FAWS said—that this is implied by “decision” anyway—at least to anyone who thinks about it enough.
At first, I Thought It Meant that you’d add more links, but that’s a bad idea, and here’s an article on why.
The existence of an all-knowing agent with perfect powers of prediction makes a mockery of the very idea of causality, at least as I understand it. (I won’t go into details here, because it doesn’t really matter, as you’ll see.) Obviously causal decision theory doesn’t work if causality doesn’t make sense. However, since I assign negligible probability to the existence of such a being, I can still think that CDT is correct for practical purposes, while remembering that it can break down in extreme situations.
However, this doesn’t really matter for your point, which is (in part) based on this principle: “I should have been asking which decision theory would lead to the greatest payoff.”
So if we alter the story to make it compatible with causality (as Spurlock did), then the answer is still that CDT does not lead to the greatest payoff.
However (and now I’m finally getting to my point), this doesn’t mean that CDT is incorrect! Although it is normally beneficial to know the truth, there are situations in which it is beneficial (and therefore rational, in a decision-theoretic sense) to believe falsehoods, and this may be one of them. (But the positivist in me wants to object that the correctness of CDT, as distinct from the usefulness of belief in it, is not a matter of observable fact and therefore meaningless.)
So I still want to say that I should pick two boxes. But now (now being after discussion of Eliezer’s post on the subject) I add that I also should be the type of person who would pick one box, and furthermore this is more important (at least when Newcomb’s Problem is the only relevant situation), even if being such a person would lead me to mistakenly pick one box in fact.
I wonder if it is possible to go one more step: instead of asking which decision theory to use (to make decisions), we should ask which meta-decision theory we should use (to choose decision theories). In that case, maybe we would find ourselves using EDT for Newcomb-like problems (and winning), but a simpler decision theory for some other problems, where EDT is not required to win.
I don’t know what a meta-decision theory would look like (I barely know what a decision theory looks like).
I think that this just gets rolled into your overall decision theory.
For instance, suppose we have two programs. We give all odd numbers to program 1 and it performs some action. We give all even numbers to program 2 and it performs some other action. On the surface, it looks like we’ve got 2 different programs and a meta level procedure for deciding which to use. But of course, it’s trivial to code this whole system up into a single program that takes an integer and does the correct thing with it.
My point being that I think it’s misleading to try and suggest two decision theories would be at work in your example. You’ve just got one big decision theory that does different stuff at different levels (which some decision theories already do anyway).
As many of us here secretly hope, the meta-decision theory must “reproduce itself” as the object-level decision theory. Just don’t ask me what this means formally.
That makes sense. It implies that we wouldn’t find ourselves using different object-level decision theories in different situations.
(But is it possible to construct a problem analogous to Newcomb’s on which EDT loses? If so it seems we would need different object-level DTs after all.)
As I wrote elsewhere in this thread, see the Newcomb’s variant with transparent boxes, or Parfit’s Hitchhiker.
The Smoking Lesion?
Causal Decision Theory isn’t fatally flawed in this case, it’s simply harder to properly apply.
A sufficiently advanced superintelligence could perfectly replicate you or I in a simulation. In fact, I can’t currently conceive of a more reliable method of prediction.
Which is where the explanation comes in for Causal Decision Theory. You may be the simulation, if you are the simulation then which box you take DOES affect what is in the boxes.
We could do a modified Newcomb’s Problem where the perfectly honest, all knowing Omega tells you that you’re not the simulation but the actual person and the simulation has already been done which seems to resolve that possibility discussed above. I don’t think you need to though because there’s no statement in Newcomb’s Problem that says that the predictions do occur via a simulation.
It reminds me of the trolley cart example in ethics where you’re told a train is rolling out of control down a hill and will run over 3 people. By hitting a switch you can change the track it goes down and it will instead hit 1 different person. Should you hit the switch?
The specific question isn’t relevant to what I’m trying to say but people’s responses are.
People will say things like, “Well, I’d just yell at the three people to get off the tracks.”
And then you have to specify that they’re too far away.
And the person will say, “Well, I’ll run toward them yelling so I get close enough in time.”
And you have to specify that they’re too far away for that as well.
The point is that people who respond this way are missing the whole idea of the abstraction behind the trolley problem; they’re treating it as a lateral thinking test rather than a scenario used to make an intellectual point.
I feel that finding a way for CDT to answer Newcomb’s Problem via the specifics of the way Omega predicts your reactions is a similar response—trying to respecify the argument in such a way that an answer can be found rather than looking at the abstracted conception of the argument.
As always, I’m open to being shown that I’m wrong and missing something though.
Then the prediction has been based on a simulation that took place under different circumstances, since Omega (being perfectly honest) did not say this to the simulation.
But as others have said, this is beside the point. After reading all of these irrelevant objections and the irrelevant responses to them, I’m convinced that (at least when addressing people who understand decision theory up to the point of doing calculations with statistics) it’s better to phrase the question so that Omega is simply a clever human being who has achieved very high accuracy with very high correlation on a very large number of previous trials, instead of bringing perfection into it.
I’m thinking something like this:
30 cases where Omega predicts one-boxing but two-boxing takes place,
70 cases where Omega predicts two-boxing but one-boxing takes place,
270 cases where Omega predicts two-boxing and two-boxing takes place,
630 cases where Omega predicts one-boxing and one-boxing takes place.
Also, make the amounts $1 and $1000 so that utility will be very close to linear in amount of money (at least to middle-class First-Worlders like me).
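Working through those frequencies (my arithmetic on the numbers above, using the $1 and $1000 amounts):

```python
# Entries are (Omega's prediction, actual choice, number of cases).
cases = [("one", "two", 30), ("two", "one", 70), ("two", "two", 270), ("one", "one", 630)]

one_boxers = sum(n for pred, act, n in cases if act == "one")    # 700
two_boxers = sum(n for pred, act, n in cases if act == "two")    # 300
accuracy_on_one_boxers = 630 / one_boxers                        # 0.9
accuracy_on_two_boxers = 270 / two_boxers                        # 0.9

# With $1 in the always-full box and $1000 as the big prize:
average_one_box_payout = (630 * 1000 + 70 * 0) / one_boxers      # $900
average_two_box_payout = (30 * 1001 + 270 * 1) / two_boxers      # $101
print(accuracy_on_one_boxers, accuracy_on_two_boxers,
      average_one_box_payout, average_two_box_payout)
```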
Would you say the trolley car problem implies that the fat man has a strong obligation to throw himself under the train?
I’m not AdamBell, but I think that doesn’t follow. The fat man could value his own life higher than the lives of three strangers. But we have no reason to value his life higher too.
An all-knowing Omega by definition contains a simulation of this exact scenario. And in that simulation it isn’t being perfectly honest, but I still believe it is.
If Omega is in fact all-knowing, all possible scenarios exist in simulation within its infinite knowledge.
This is why throwing all-knowing entities into problems always buggers things up.
Given the abstracted conception, prediction through simulation seems to be the most probable explanation. This results in CDT working.
It’s not starting from wanting CDT to work, it’s starting from examining the problem, working out the situation from the evidence, and then working out what CDT would say to do.
If I can’t apply reason when using CDT, CDT will fail when I’m presented with an “opportunity” to buy a magic rock that costs £10,000, and will make me win the lottery within a month.
Sigh.
You are missing the point.
Replace Omega with a genius Psychologist who only gets it right 99% of the time and CDT will have you walk off with $1000 while correct thinking leaves you with $1,000,000 almost all of the time, it’s just that in that scenario people will uselessly argue that the 1% chance to get lucky somehow makes it rational.
How is the genius psychologist likely to be predicting your actions?
To me, it seems probable that he’s simulating you, imperfectly, within his own mind.
How would you explain his methodology?
EDIT: to clarify my reasoning, I simulate people, myself included, often. Generally when I want to predict their actions. I’m not very good at it. Were I a genius psychologist, and hence obviously great at simulating people, I don’t see why I would be any less likely to simulate people.
She doesn’t tell you in the scenario.
Maybe she had her grad students talk with you on various subjects and subject you to various stealth psychological experiments over the last 10 years and watched it all on video, all based on your signing an agreement to take part in a psychological experiment that didn’t specify a duration 15 years ago that was followed by a dummy experiment and that you promptly forgot about.
Maybe she is secretly your mother.
Maybe she is just that good and can tell by the way you shook her hand.
In any case, 99% shouldn’t require imagining the actions of a copy of you that is reflectively indistinguishable from you.
Those are all ways of her having gathered the evidence.
From the evidence, how has she reached the conclusion?
The most plausible scenario for getting from evidence to conclusion is mental simulation as far as I can tell.
You haven’t even proposed a single alternative yet
EDIT: (did you edit this in, or did I miss it?)
You expect the copy to be able to tell it’s a copy? Why? Why would the psychologist simulate it discovering that it is the copy? When you simulate someone’s reaction to possible courses of action, do you simulate them as being aware of being a simulation?
None of my internal simulations have ever been aware of being simulations.
There are four possibilities:
The copy never wonders whether it’s a copy.
The copy wonders about being a copy and concludes that it is.
The copy concludes that it cannot be a copy.
The copy is, from its point of view, reflectively indistinguishable from you.
Only in case 4. will you seriously have to wonder whether you are a copy. In case 1. you will know that you are not as soon as you consider the possibility, case 2. is irrelevant unless you also assume that the real you will also conclude that it’s a copy, which is logically inconsistent.
Nevertheless case 1. should be sufficient for predicting the actions you take once you conclude that you are not a copy to a reasonable accuracy.
Case 1 is sufficient to predict my actions IFF I would never wonder about whether I was a copy.
Given that I would in fact wonder whether I was a copy, and that that thought-process is significant to the scenario, Case 1 seems likely to be woefully inadequate for simulating me.
Case 4 is therefore much more plausible for a genius psychologist (with 99% accuracy) from my PoV.
The psychologist tells you that she simply isn’t capable of case 4 (there are all sorts of at least somewhat verifiable facts that you would expect yourself to know and that she doesn’t [e. g. details about your job that have to make sense and be consistent with a whole web of other details, that she couldn’t plausibly have spied out or invented a convincing equivalent thereof herself]). Given that you just wondered you can’t be a simulation. What do you do?
I know she’s lying.
Case 4 just requires that the simulation not recognise that it is a simulation when it considers whether or not it’s a simulation, i.e. that whatever question it asks itself, it finds an answer. It can’t actually check for consistency, remember; it’s a simulation. If it would find an inconsistency, then “change detail [removing inconsistency], run” or “insert thought ‘yep, that’s all consistent’; run”.
If she’s capable of case 1, she’s capable of case 4, even if she has to insert the memory on it being requested, rather than prior to request.
The stealth psychological experiments could have included an isomorphic problem, or she could be using a more sophisticated version of:
New ager: one box
Thinks time travel conflicts with free will: two box
uses EDT: one box
TDT/UDT: one box
bog standard CDT: two box
CDT, but takes simulation hypothesis seriously: one box if thinking it possible that in a simulation, two box otherwise.
Stealth psychological experiments you forgot about allowed her to determine necessary and/or sufficient conditions for you assuming that you might be in a simulation that you yourself are unaware of, and she set the whole thing up in a such a way that she can tell with high confidence whether you do.
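As a toy illustration, the categorisation table above could be encoded as a simple lookup (the categories are from the comment; the code is just a sketch):

```python
# Toy sketch of the categorisation-based predictor described above.
PREDICTION_BY_CATEGORY = {
    "new ager": "one box",
    "thinks time travel conflicts with free will": "two box",
    "uses EDT": "one box",
    "TDT/UDT": "one box",
    "bog standard CDT": "two box",
}

def predict(category, thinks_possibly_in_sim=False):
    # The last row of the table depends on whether the subject thinks
    # they might be in a simulation right now.
    if category == "CDT, but takes simulation hypothesis seriously":
        return "one box" if thinks_possibly_in_sim else "two box"
    return PREDICTION_BY_CATEGORY.get(category, "unknown")

print(predict("uses EDT"))            # one box
print(predict("bog standard CDT"))    # two box
```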
The categorisation possibility is reasonable. Personally I would estimate the probability of 99% accuracy achieved through categorisation lower than the probability of 99% accuracy achieved through mental simulation, but it’s certainly a competitive hypothesis.
Assuming she tells you that she predicted your actions through some unspecified mechanism other than imagining your thought process in sufficient detail for the imagined version to ask itself whether it just exists in her imagination, what do you do?
I question what reason I have to assume she’s being honest, and is in fact correct.
Given her psychological genius she is likely correct about the methods she used, although not certainly (she may not be good at self-analysis).
If I conclude that either A) she is being honest or B) the whole pay-off is a lie, then I will probably act on the second most plausible (to my mind) scenario. I’ve yet to work out what that is. Repeating the experiment often enough to get statistics that are precise enough for 99% accuracy would be extremely costly with the standard pay-out scheme, so while I jumped towards that as my secondary scenario, it’s actually very implausible.
Reduce both payoffs by a factor of 100.
The psychologist is hooked up to a revolutionary lie detector that is 99% reliable; there is a standing prize of $1,000,000,000 for anyone who can, after calibration, deceive it on more than 10 out of 50 statements (with no further calibration during the trial). The psychologist is known to have tried the test three times and failed (with 1, 4, and 3 successful deceptions).
Well, the psychologist’s track record of successful lying is within a plausible range of the 99% reliability.
With the payoffs decreased by a factor of 100, and the lie detector added in, my best guess would be that she’s repeated the experiment often, and gathered up a statistical model of people to which she can compare me, and to which I will be added. In such a circumstance I think I would still tend to one-box, but the reason is slightly different.
I value the wellbeing of people who are like me. If I one-box, others like me will be more likely to receive the $10,000 rather than just the $10.
Are you sure you are actually trying to make a valid defense of CDT and not just looking for excuses?
What would you do if that somehow were not a consideration? (What would you do if you were more selfish, what would an otherwise identical more selfish simulation of you do, what would you do if you could be reasonably sure that you won’t affect the payoff for anyone else you would care about for some reason that doesn’t change your estimation of the accuracy of the prediction and the way it came about [e. g. you are the last subject and everyone before you for whom it would matter was asked what they would have done if they had been the last subject]?)
Are you sure you’re not just trying to destroy CDT rather than think rationally? If you think I am being irrationally defensive of CDT, check the OTHER thread off my first reply. You seem to be trying very hard indeed to tear down CDT.
CDT gives the correct result in the original posted scenario, for reasons which are not immediately obvious but are nonetheless present. You appear to have accepted that, what with your gradually moving further and further from the original scenario.
In your scenario, designed specifically to make CDT not work, it would still work for me, because of who I am.
If I was more selfish, I don’t see CDT working in your scenario. If there is a reason why it should work, I haven’t realised it. But then, it’s a scenario contrived with the specific intention of CDT not working.
Your “everyone was the last subject” scenario breaks down somewhat; if everyone is told they are the last subject then I can’t take being told that I’m the last subject seriously. If I AM the last subject, I will be extremely skeptical, given the sample-size I expect to be needed for the 99% accuracy, and thus I will tend to behave as though I am not the last subject due to not believing I am the last subject.
My original point was simply that the starting post, while claiming to show problems with CDT, failed. It used a scenario that didn’t illustrate any problem with CDT. Do you still disagree with my original point?
EDIT: You seem to think that I’m doing my best to defend CDT. I’m really not, I have no major vested interest in defending CDT except when it was unfairly attacked. Adambell has posted two scenarios where CDT works fine, with claims that CDT doesn’t work in those scenarios.
Almost everyone agrees that CDT two-boxes in the original scenario, both proponents and opponents of CDT. The only way to make CDT “work” is to find excuses that are completely irrelevant to the original point of the scenario and amount to deliberately understanding the scenario as different from the one intended. This discussion thread has shown that the existence of such excuses is not implied by the structure of the problem, so any issues with a particular formulation are irrelevant. It’s sort of like arguing that EDT is right in the smoking lesion problem because any evidence that smoking and cancer are both caused by lesions, rather than cancer by smoking, would be dubious and avoiding smoking just to be sure would be prudent.
So because I disagree with your consensus, my rational objection must be wrong?
I didn’t change the scenario. I looked at the scenario, and asked what someone applying CDT rationally, who understood that it’s impossible to tell whether you’re being simulated or not, would do. And, as it happened, I got the answer “they would one-box, because they’re probably a simulation”.
If I posted a scenario where an EDT person would choose to walk through a minefield, because they’ve never seen anyone walk through a minefield and thus don’t consider walking through a minefield to be evidence that they won’t live much longer, would you not think my scenario-crafting skills were a bit weak?
Not wrong, beside the point. Objections like that don’t touch the core of the problem at all. Finding clever ways for decision theory differences in example cases not to matter doesn’t change the validity of the decision theories.
Your minefield example is different in that the original formulation of Newcomb’s problem gets the point across for almost everyone, while I’m not sure what the point of the minefield example would be. That EDT would be even stupider than it already is if it restricted what kinds of evidence could be considered? Well, yes, of course. I won’t defend EDT; it’s wronger than CDT (though at least a bit better defined).
CDT is seemingly imperfect. I have acknowledged such.
But pointing to CDT as failing when it doesn’t fail doesn’t help. Pointing to where it DOES fail helps.
When I see someone getting the right answer for the wrong reason I criticise their reasoning.
The point you should take away from Newcomb’s paradox isn’t that CDT fails (in some formulations it seems to, in others it’s just hard to apply); it’s that CDT is really hard to apply, so using something that gets the right answer easily is better.
Newcomb’s problem tries to show that CDT’s caring only about things caused by your decision afterwards can be a weakness, by providing an example where the things caused by accurate predictions of your decision outweigh those things. Everything else is just window dressing. You are using the window dressing to explain how you care about these other things caused by the decision, so you coincidentally act just as if you also cared about the causes of accurate predictions of your decision. But as long as you arrange for the things caused by the decision (which, according to the intention of the problem statement, should cause the less desirable things afterwards) to actually cause the more desirable things afterwards, you are not addressing Newcomb’s problem. You are just showing that what is a particular formulation of Newcomb’s problem for most people isn’t a formulation of Newcomb’s problem for you, in a way that doesn’t generalize.
The “accurate prediction” is a central part of Newcomb’s problem. The issue of whether it’s possible (I feel it is) and IN WHAT WAYS it is possible, are central to the validity of Newcomb’s problem.
If all possible ways of the accurate prediction were to make CDT work, then Newcomb’s problem wouldn’t be a problem for CDT. (apart from the practical one of it being hard to apply correctly)
At present, it seems like there are possible ways that make CDT work, and possible ways that make CDT not work. If it were to someday be proved that all possible ways make CDT work, that would be a major proof. If it were to be proved (beyond all doubt) that a possible way was completely incompatible with CDT, that could also be important for AI creation.
I suggest that the way you use ‘CDT’ is actually a hop and a jump in the direction of TDT. When you already have a box containing $1,000,000 in your hand you are looking at a $10,000 sitting on the table and deciding not to take it. Even though you know that nothing you do now has any way of causing the money you already have to disappear. Pure CDT agents just don’t do that.
If you don’t know whether you’re a simulation or not, you don’t know whether or not your taking the second box will cause the real-world money not to be there. And, as a simulation, you probably won’t get to spend any of that sim-world money you’ve got there.
To be fair, I don’t particularly use CDT consciously, because it seems to be flawed somehow (or at least, harder to use than intuition, and I’m lazy). But I came across Newcomb’s paradox, thought about it, and realised that in the traditional formulation I’m probably a simulation.
I don’t see why realising I’m probably a simulation is something a CDT agent can’t do?
Replace ‘Omega’ with Patrick Jane. No sims. What do you do?
A) I one-box. I will one-box in most reasonable scenarios.
B)How do you predict other people’s actions?
Personally, I mentally simulate them. Not particularly well, mind, but I do mentally simulate them. Am I unusual in this?
I’ve never watched the Mentalist, but if Patrick Jane is sufficiently good to get a 99% success rate, I’m guessing his simulations are pretty damn good.
Patrick Jane is a fictional character in the TV show The Mentalist. He’s a former (fake) psychic who now uses his cold reading skills to fight crime.
Cheers, had been looking that up, oddly my edit to my post didn’t seem to update it.
No, he doesn’t (necessarily). He could prove the inevitable outcome based on elements of the known state of your brain without ever simulating anything. If you read the reduction of “could” you will find a somewhat similar distinction that may make things clearer.
… So we can’t conclude this.
This suggests you don’t really understand the problem (or perhaps CDT). That is not the same kind of reasoning.
Does he not know the answer to “what will happen after this” with regards to every point in the scenario?
If he doesn’t, is he all-knowing?
If he does know the answer at every point, in what way doesn’t he contain the entire scenario?
EDIT: A non-all-knowing superintelligence could presumably find ways other than simulation of getting my answer, as I said simulation just strikes me as the most probable. If you think I should update my probability estimate of the other methods, that’s a perfectly reasonable objection to my logic re: a non-all-knowing superint.
Certainly. That is what I consider Omega doing when I think about these problems. It is a useful intuition pump, something we can get our head around.
The prediction method doesn’t have to be very good. A predictor that’s only slightly better than chance is quite enough to put EDT and CDT into conflict. For example, I could achieve better than 50% accuracy on LW participants by just reading through their comment history and seeing what they think about Newcomb’s Problem.
Indeed. A 55% accuracy is plenty to make this an issue. And at present, CDT seems to me to fail on the 55% accuracy problem; whereas EDT clearly works.
It’s easy to construct Newcomb-like problems where EDT fails. For example, we could make the two boxes transparent, so you already see their contents and your action gives you no further evidence. One-boxing is still the right decision because that’s what you’d like to be predicted by Omega (alternatively: if you could modify your brain before meeting Omega, that’s what you’d precommit to doing), but both EDT and CDT fail to see that. Another similar example is Parfit’s Hitchhiker.
CDT still works in that case if you’re dealing with Omega and have no reason to believe Omega won’t simulate you. If you are one of the simulations, you decide the prediction for the real version.
How about if you’re dealing with me?
Then CDT seems to fail, with it being a low-accuracy case (perhaps 55%, as I used above), and EDT fails due to the prize already being in evidence.
Typo here.
I would also like to see subheadings for “causal says” and “evidential says”, probably changing “Decision theory and Newcomb’s problem” just to make it neat. That would make the flow of the text readable at a glance.
Since you are making posts that are intended to be linked to, it is worth spending extra time getting the details right.
I’m wearing out the d-o-n and e keys on my keyboard. Thanks for the comments. Doing another proofread now in light of the number of errors so hopefully that counts as “spending extra time”.
I appreciate your work. I love having posts to link to—saves a lot of time in the long run.
Thanks.
And I will keep it in mind with future posts that if I’m writing something to be linked to, it’s worth making the outline clear and making as few mistakes as possible.
Your link in the appendix goes to the wrong place. Presumably you meant this: http://plato.stanford.edu/entries/decision-causal/
Indeed I do. Can’t explain what happened in my brain there. Fixing it now.
Newcomb’s problem proves EDT only by cheating.
Before it presents you with the problem, Omega tests whether you subscribe to CDT or EDT, and puts the million in the box iff you subscribe to EDT. So you’ll get more if you subscribe to EDT. So you’ll be better off applying heuristics that you’re arbitrarily rewarded for, but this doesn’t say anything about normal situations (like kissing the sick baby).
The standard reply to your objection is that Newcomb’s Problem doesn’t actually care about the “ritual of cognition” that you happen to use. It only cares about your answer. You could one-box because you worship Cthulhu, instead of EDT, and still win. For example, I don’t subscribe to EDT, but still one-box because I find UDT’s solution convincing :-)
Newcomb’s problem is a poor vehicle for illustrating points about rationality. It is a minefield of misconceptions and unstated assumptions. In general the one boxers are as wrong as the two boxers. When Omega is not infallible the winning strategy depends on how Omega arrives at the prediction. If that information is not assumed or somehow deducible then the winning strategy is impossible to determine.
Your point about causal decision theory being flawed in some circumstances may be correct, but using Newcomb’s problem to illustrate it detracts from the argument.
Consider a condensed analogy. Someone will roll a standard six-sided die. You can bet on six or not-six to come up. Both bets double your money if you win. Suppose the six comes up, so betting on six wins. Since six wins, any decision theory that has you betting not-six is flawed.
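For what it’s worth, the arithmetic behind the analogy (a minimal sketch in Python; the even-money payout is my reading of “both bets double your money”):

```python
from fractions import Fraction

# Expected profit per $1 staked on an even-money bet with win probability p_win:
# win the stake back plus an equal amount, or lose the stake.
def expected_profit(p_win):
    return p_win * 1 + (1 - p_win) * (-1)

print(expected_profit(Fraction(1, 6)))  # bet on six:     -2/3 of the stake
print(expected_profit(Fraction(5, 6)))  # bet on not-six: +2/3 of the stake
```

Betting not-six has the higher expected profit even on the roll where six happens to come up, which seems to be the point of the analogy: a single lucky outcome can’t by itself show a decision theory to be flawed.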
Any conclusions about how things work in the real world that are drawn from Newcomb’s problem crucially rest on the assumption that an all-knowing being might, at least theoretically, exist as a logically consistent concept. If this crucial assumption is flawed, then any conclusions drawn from Newcomb’s problem are likely flawed too.
To be all-knowing, you’d have to know everything about everything, including everything about yourself. To contain all that knowledge, you’d have to be larger than it—otherwise there would be no matter or energy left to perform the activity of knowing it all. So, in order to be all-knowing, you’d have to be larger than yourself. Which is theoretically impossible. So, the Newcomb problem crucially rests on a faulty assumption: that something that is theoretically impossible might be theoretically possible.
So, conclusions drawn from Newcomb’s problem are no more valid than conclusions drawn from any other fairy tale. They are no more valid than, for example, the reasoning: “if an omnipotent and omniscient God would exist who would eventually reward all good humans with eternal bliss, all good humans would eventually be rewarded with eternal bliss → all good humans will eventually be rewarded with eternal bliss whether the existence of an omnipotent and omniscient God is even theoretically possible or not”.
One might think that Newcomb’s problem could be altered; one might think that instead of an “all-knowing being” it could assume the existence of a non-all-knowing being that nevertheless knows what you will choose. But if the MWI is correct, or if the universe is otherwise infinitely large, not all of the infinitely many identical copies of you would be controlled by any such being. If they were, that being would have to be all-knowing, which, as shown, is not possible.
I disagree with that. The being in Newcomb’s problem wouldn’t have to be all-knowing. He would just have to know what everyone else is going to do conditional on his own actions. This would mean that any act of prediction would also cause the being to be faced with a choice about the outcome.
For example:
Suppose I am all-knowing, with the exception that I do not have full knowledge about myself. I am about to make a prediction, and then have a conversation with you, and then I am going to sit in a locked metal box for an hour. (Theoretically, you could argue that even then I would affect the outside world, but it will take time for chaos to become an issue, and I can factor that in.) You are about to go driving.
I predict that if I tell you that you will have a car accident in half an hour, you will drive carefully and will not have a car accident.
I also predict that if I do not tell you that you will have a car accident in half an hour, you will drive as usual and you will have a car accident.
I lack full self-knowledge. I cannot predict whether I will tell you until I actually decide to tell you.
I decide not to tell you. I get in my metal box and wait. I know that you will have a car accident in half an hour.
My lack of complete self-knowledge merely means that I do not do pure prediction: Instead any prediction I make is conditional on my own actions and therefore I get to choose which of a number of predictions comes true. (In reality, of course, the idea that I really had a “choice” in any free will sense is debatable, but my experience will be like that.)
It would be the same for Newcomb’s boxes. Now, you could argue that a paradox could arise if the link between predictions and required actions forced Omega to break the rules of the game. For example, if Omega predicts that if he puts the money in both boxes, you will open both boxes, then clearly Omega can’t follow the rules. However, this would require some kind of causal link between Omega’s actions and the other players. There could be such a causal link. For example, while Omega is putting the money in the boxes, he may disturb weather patterns with his hands, and due to chaos theory make it rain on the other player on his way to play the game, causing him to open both boxes. However, it seems reasonable that Omega could manage his actions accordingly to control this: he may have to move his hands a particular way, or he may need to ensure that the game is played very soon after the boxes are loaded.
Hereinafter, “to Know x” means “to be objectively right about x, and to be subjectively 100 percent certain of x, and to have let the former ‘completely scientifically cause’ the latter (i.e. to have used the former to create the latter in a completely scientific manner), such that it cannot, even theoretically, be the case that something other than the former coincidentally and crucially misleadingly caused the latter—and to Know that all these criteria are met”.
Anything that I merely know (“know” being defined as people usually seem to implicitly define it in their use of it), as opposed to Know, may turn out to be wrong (for all that I know). It seems that the more our scientists know, the more they realize that they don’t know. Perhaps this “rule” holds forever, for every advancing civilisation (with negligible exceptions)? I think there could not even theoretically be any Knowing in the (or any) world. I conjecture that, much like it’s universally theoretically impossible to find a unique integer for every unique real, it’s universally theoretically impossible for any being to Know anything at all, such as for example what box(es) a human being will take.
Nick Bostrom’s Simulation Argument seems to show that any conceivable being that could theoretically exist might very well (for all he (that being) knows) be living in a computer simulation controlled by a mightier being than himself. This universal uncertainty means that no being could Know that he has perfect powers of prediction over anything whatsoever. Making a “correct prediction” partly due to luck isn’t having perfect powers of prediction, and a being who doesn’t Know what he is doing cannot predict anything correctly without at least some luck (because without luck, Murphy’s law holds). This means that no being could have perfect powers of prediction.
Now let “Omeg” be defined as the closest (in terms of knowledge of the world) to an all-Knowing being (Omega) that could theoretically exist. Let A be defined as the part(s) of an Omeg that are fully known by the Omeg itself, and let B be defined as whatever else there may be in an Omeg.* I suggest that in no Omeg of at least the size of the Milky Way can the B part be too small to secretly contain mechanisms that could be stealthily keeping the Omeg arbitrarily ignorant by having it falsely perceive arbitrarily much of its own wildest thought experiments (or whatever other unready thoughts it sometimes produces) to be knowledge (or even Knowledge). I therefore suggest that B, in any Omeg, could be keeping its Omeg under the impression that the A part is sufficient for correct prediction of, say, my choice of boxes, while in reality it isn’t. Conclusion: no theoretically possible being could perfectly predict any other being’s choice of boxes.
You may doubt it, but you can’t exclude the possibility. This means you also can’t exclude the possibility that whatever implications Newcomb’s problem seems to produce that wouldn’t occur to people if Omega were replaced by, say, a human psychologist, occur to people only because the assumption that there could be such a thing as a perfect predictor of something (or of anything) is too unreasonable to be worthy of acceptance: its crucial underpinnings don’t make sense (just as it doesn’t make sense to assume that there is an integer for every real), and it can therefore be expected to produce arbitrarily misleading conclusions (about decision theory, in this case), much like many seemingly reasonable but heavily biased extreme thought experiments designed to smear utilitarianism scare even very skilled thinkers into drawing false conclusions about utilitarianism.
Or suppose someone goes to space, experiences weightlessness, thinks: “hey, why doesn’t my spaceship seem to exert any gravity on me?” and draws the conclusion: “it’s not gravity that keeps people down on Earth; it’s just that the Earth sucks”. Like that conclusion would be flawed, the conclusion that Newcomb’s problem shows that we should replace Causal Decision Theory with Evidential Decision Theory is flawed.
So, to be as faithful to the original Newcomb thought experiment as is possible within reason, I’d interpret it in the way that just barely rids its premises of theoretical impossibility: I’d take Omega to mean Omeg, as defined above. An Omeg is fallible, but probably most of the time better than me at predicting my behavior, so I should definitely one-box, for the same reason that I should one-box if the predictor were a mere human being who just knew me very well. To risk a million dollars just to possibly gain another thousand isn’t worth it. Causal Decision Theory leads me to this conclusion just fine.
*) You might think B would be “the real” (or “another, smarter”) Omeg, by virtue of controlling A. But neither B nor A can rationally completely exclude the possibility that the other one of them is in secret control of both of them. So neither of them can have “perfect powers of prediction” over any being whatsoever.
I know nothing! Nothing!
My previous post resulted in 0 points, despite being very thoroughly thought-through. A comment on it, consisting of the four words “I know nothing! Nothing!” resulted in 4 points. If someone could please explain this, I’d be a grateful Goo.
That is unfortunate. You deserve a better explanation.
I believe a lot of the posters here (because they’re about as good as me at correct reasoning) did not read much of your exposition because, toward the beginning, you posited a circumstance in which someone has 100% certainty of something. But this breaks all good epistemic models. One of the humans here provided a thorough explanation of why in the article “0 and 1 Are Not Probabilities”.
That, I believe, is why User:wedrifid found it insightful (as did 4 others) to say that, as per your standard, User:wedrifid knows nothing, since that User (like me and most others here) does not use 100% for any probability in our models.
Also, why do you call yourself “goo”? Wouldn’t you rather be something stronger?
If you introduce yourself in the introduction thread, perhaps explaining your name, you can gain some Karma. Currently, you seem to be below zero, which introduces waiting periods between comments. I had that problem when I first posted here, but you can overcome it!
I don’t know why your post got 0 points and no replies. But one of the reasons may be that it is hard to extract the central point or conclusion you are trying to make.
My comment gleaned 4 karma by taking the definition you introduce in the first sentence and tracing the implications using the reasoning Clippy mentions. This leads to the conclusion that I am literally in the epistemic state that is used in a hyperbolic sense by the character Schultz from Hogan’s Heroes. While humour itself is hard to describe, things that are surprising and include a contrast between distant concepts tend to qualify.
(By the way, the member Clippy is roleplaying an early iteration of an artificial intelligence with the goal of maximising paperclips, an example used to reference a broad group of unfriendly AIs that could plausibly be created by well-meaning but idiotic programmers.)
I’m not role-playing, ape.
In general, the voting system doesn’t reward thoroughness of thought, nor large wads of text. It rewards small things that can be easily digested and seem insightful, no more than one or maybe two inferential steps from the median voter. Nitpicking and jokes are both easily judged.
The opposite is true: large wads of text can be turned into top-level posts, which get tenfold karma.
...Or, perhaps more correctly put, such a being (a non-all-knowing being who, however, “knows what you will do”) could not know for sure that he knows what all of the copies of you will do, because in order to know that, he would have to be all-knowing, and so any statement to the effect that “he knows what you will do” is a highly questionable statement.
Just like a being who doesn’t know that he is all-knowing cannot reasonably be said to be all-knowing, a being who doesn’t know that he knows what all of the copies of you will do (because he doesn’t know how many copies of you there exist outside of the parts of the universe he has knowledge of) cannot reasonably be said to know what all of the copies of you will do.