Any conclusions about how things work in the real world that are drawn from Newcomb’s problem rest crucially on the assumption that an all-knowing being could, at least theoretically, exist as a logically consistent concept. If this crucial assumption is flawed, then any conclusions drawn from Newcomb’s problem are likely flawed too.
To be all-knowing, you’d have to know everything about everything, including everything about yourself. To contain all that knowledge, you’d have to be larger than it; otherwise there would be no matter or energy left to perform the activity of knowing it all. So, to be all-knowing, you’d have to be larger than yourself, which is theoretically impossible. Newcomb’s problem therefore rests on a faulty assumption: that something theoretically impossible might be theoretically possible.
So, conclusions drawn from Newcomb’s problem are no more valid than conclusions drawn from any other fairy tale. They are no more valid than, for example, the reasoning: “if there existed an omnipotent and omniscient God who would eventually reward all good humans with eternal bliss, then all good humans would eventually be rewarded with eternal bliss → therefore all good humans will eventually be rewarded with eternal bliss, whether or not the existence of an omnipotent and omniscient God is even theoretically possible”.
One might think that Newcomb’s problem could be altered: instead of an “all-knowing being”, it could assume the existence of a non-all-knowing being that nevertheless knows what you will choose. But if the MWI is correct, or if the universe is otherwise infinitely large, not all of the infinitely many identical copies of you would be controlled by any such being. If they were, that being would have to be all-knowing, which, as shown, is not possible.
I disagree with that. The being in Newcomb’s problem wouldn’t have to be all-knowing. He would just have to know what everyone else is going to do conditional on his own actions. This would mean that any act of prediction would also cause the being to be faced with a choice about the outcome.
For example:
Suppose I am all-knowing, with the exception that I do not have full knowledge about myself. I am about to make a prediction, and then have a conversation with you, and then I am going to sit in a locked metal box for an hour. (Theoretically, you could argue that even then I would affect the outside world, but it will take time for chaos to become an issue, and I can factor that in.) You are about to go driving.
I predict that if I tell you that you will have a car accident in half an hour, you will drive carefully and will not have a car accident.
I also predict that if I do not tell you that you will have a car accident in half an hour, you will drive as usual and you will have a car accident.
I lack full self-knowledge. I cannot predict whether I will tell you until I actually decide to tell you.
I decide not to tell you. I get in my metal box and wait. I know that you will have a car accident in half an hour.
My lack of complete self-knowledge merely means that I do not do pure prediction: instead, any prediction I make is conditional on my own actions, and therefore I get to choose which of a number of predictions comes true. (In reality, of course, the idea that I really had a “choice” in any free-will sense is debatable, but my experience will be like that.)
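In code, the structure of that example might look roughly like this (a minimal sketch; the action names and outcomes are mine, purely for illustration): the predictor cannot predict its own choice, but it can map each of its possible actions to a predicted outcome and then pick the action whose prediction it wants to come true.

```python
# Minimal sketch of the conditional-prediction structure described above.
# The action names and outcomes are illustrative, not part of the thought experiment.

CONDITIONAL_PREDICTIONS = {
    "tell you about the accident": "you drive carefully and have no accident",
    "stay silent": "you drive as usual and have an accident",
}

def choose(preferred_outcome: str) -> str:
    """Return an action whose conditional prediction matches the preferred outcome."""
    for action, predicted in CONDITIONAL_PREDICTIONS.items():
        if predicted == preferred_outcome:
            return action
    raise ValueError("no available action leads to that outcome")

action = choose("you drive as usual and have an accident")
print(f"I decide to {action}; I now know what will happen.")
```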
It would be the same for Newcomb’s boxes. Now, you could argue that a paradox would arise if the link between predictions and required actions forced Omega to break the rules of the game. For example, if Omega predicts that, if he puts the money in both boxes, you will open both boxes, then clearly Omega can’t follow the rules. However, this would require some kind of causal link between Omega’s actions and the other players. There could be such a link: for example, while Omega is putting the money in the boxes, he may disturb weather patterns with his hands and, through chaos, make it rain on the other player on his way to play the game, causing him to open both boxes. However, it seems reasonable that Omega could manage his actions to control this: he may have to move his hands a particular way, or he may need to ensure that the game is played very soon after the boxes are loaded.
Hereinafter, “to Know x” means “to be objectively right about x, and to be subjectively 100 percent certain of x, and to have let the former ‘completely scientifically cause’ the latter (i.e. to have used the former to create the latter in a completely scientific manner), such that it cannot, even theoretically, be the case that something other than the former coincidentally and crucially misleadingly caused the latter—and to Know that all these criteria are met”.
Anything that I merely know (“know” being defined as people usually seem to implicitly define it in their use of it), as opposed to Know, may turn out to be wrong, for all I know. It seems that the more our scientists know, the more they realize they don’t know. Perhaps this “rule” holds forever, for every advancing civilisation (with negligible exceptions)? I think there could not, even theoretically, be any Knowing in this (or any) world. I conjecture that, much as it is theoretically impossible to assign a unique integer to every unique real, it is theoretically impossible for any being to Know anything at all, such as, for example, which box(es) a human being will take.
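For reference, the impossibility half of that analogy is just Cantor’s diagonal argument (textbook material, not something from the thread): given any pairing of the reals in (0, 1) with the integers, listed as r_1, r_2, r_3, …, the number built by altering the i-th digit of r_i differs from every entry, so the pairing misses it.

```latex
% Standard diagonal construction (illustrative, not from the post): given any
% list r_1, r_2, r_3, ... of reals in (0,1) with digits r_i = 0.d_{i1}d_{i2}...,
% build a real that the list necessarily misses.
\[
  e_i =
  \begin{cases}
    5, & d_{ii} \neq 5,\\
    6, & d_{ii} = 5,
  \end{cases}
  \qquad
  x = \sum_{i \ge 1} \frac{e_i}{10^{\,i}}
  \quad\Rightarrow\quad
  x \neq r_i \ \text{for every } i .
\]
```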
Nick Bostrom’s Simulation Argument seems to show that any conceivable being that could theoretically exist might very well, for all that being knows, be living in a computer simulation controlled by a being mightier than himself. This universal uncertainty means that no being could Know that he has perfect powers of prediction over anything whatsoever. Making a “correct prediction” partly due to luck isn’t having perfect powers of prediction, and a being who doesn’t Know what he is doing cannot predict anything correctly without at least some luck (because without luck, Murphy’s law holds). So no being could have perfect powers of prediction.
Now let “Omeg” be defined as the closest thing (in terms of knowledge of the world) to an all-Knowing being (Omega) that could theoretically exist. Let A be defined as the part(s) of an Omeg that are fully known by the Omeg itself, and let B be defined as whatever else there may be in an Omeg. I suggest that in no Omeg of at least the size of the Milky Way can the B part be too small to secretly contain mechanisms that stealthily keep the Omeg arbitrarily ignorant, by making it falsely perceive arbitrarily much of its own wildest thought experiments (or whatever other unready thoughts it sometimes produces) as knowledge (or even Knowledge). I therefore suggest that B, in any Omeg, could be keeping its Omeg under the impression that the A part is sufficient for correctly predicting, say, my choice of boxes, while in reality it isn’t. Conclusion: no theoretically possible being could perfectly predict any other being’s choice of boxes.
You may doubt this, but you can’t exclude the possibility. That means you also can’t exclude the following: whatever implications Newcomb’s problem seems to produce that wouldn’t occur to people if Omega were replaced by, say, a human psychologist may occur to people only because the assumption that there could be a perfect predictor of something, or of anything, is too unreasonable to be worth accepting. Its crucial underpinnings don’t make sense (just as it doesn’t make sense to assume that there is an integer for every real), and it can therefore be expected to produce arbitrarily misleading conclusions (about decision theory, in this case), much as many seemingly reasonable but heavily biased extreme thought experiments designed to smear utilitarianism scare even very skilled thinkers into drawing false conclusions about utilitarianism.
Or suppose someone goes to space, experiences weightlessness, thinks, “Hey, why doesn’t my spaceship seem to exert any gravity on me?”, and concludes: “It’s not gravity that keeps people down on Earth; it’s just that the Earth sucks.” Just as that conclusion would be flawed, so is the conclusion that Newcomb’s problem shows we should replace Causal Decision Theory with Evidential Decision Theory.
So, to be as faithful to the original Newcomb thought experiment as is possible within reason, I’d interpret it in the way that just barely rids its premises of theoretical impossibility: I’d take Omega to mean an Omeg, as defined above. An Omeg is fallible, but probably better than me at predicting my behavior most of the time, so I should definitely one-box, for the same reason that I should one-box if the predictor were a mere human being who just knew me very well. Risking a million dollars just to possibly gain another thousand isn’t worth it. Causal Decision Theory leads me to this conclusion just fine.
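A rough sketch of the arithmetic behind “isn’t worth it”, assuming a fallible predictor that is right with some probability p and the standard $1,000,000 / $1,000 payoffs (the accuracy values below are illustrative, not part of the problem statement):

```python
# Expected-payoff sketch for Newcomb's problem with a fallible predictor ("Omeg").
# Assumes the predictor is right with probability p and the standard payoffs apply.

def expected_payoff(p: float, one_box: bool) -> float:
    """Expected dollars, given predictor accuracy p and the chosen strategy."""
    if one_box:
        # Box B contains the million iff the predictor correctly foresaw one-boxing.
        return p * 1_000_000
    # Two-boxing always collects the $1,000, plus the million if the predictor erred.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.55, 0.75, 0.99):
    print(f"p={p}: one-box {expected_payoff(p, True):,.0f}  "
          f"two-box {expected_payoff(p, False):,.0f}")
```

With these payoffs, the one-boxing column comes out ahead whenever the predictor is right more than about 50.05 percent of the time.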
*) You might think B would be “the real” (or “another, smarter”) Omeg, by virtue of controlling A. But neither B nor A can rationally exclude the possibility that the other one of them is in secret control of both of them. So neither of them can have “perfect powers of prediction” over any being whatsoever.
I know nothing! Nothing!
My previous post resulted in 0 points, despite being very thoroughly thought-through. A comment on it, consisting of the four words “I know nothing! Nothing!” resulted in 4 points. If someone could please explain this, I’d be a grateful Goo.
That is unfortunate. You deserve a better explanation.
I believe a lot of the posters here (because they’re about as good as me at correct reasoning) did not read much of your exposition because, toward the beginning, you posited a circumstance in which someone has 100% certainty of something. But this breaks all good epistemic models. One of the humans here provided a thorough explanation of why in the article “0 And 1 Are Not Probabilities”.
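For what it’s worth, a minimal sketch of the point behind that article (the numbers here are only illustrative): rewriting probabilities as log-odds sends p = 1 off to infinity, so reaching certainty would take infinitely strong evidence.

```python
# Log-odds sketch behind "0 and 1 are not probabilities" (illustrative numbers only).
import math

def log_odds_db(p: float) -> float:
    """Log-odds in decibels: 10 * log10(p / (1 - p))."""
    return 10 * math.log10(p / (1 - p))

for p in (0.5, 0.9, 0.99, 0.999999):
    print(f"p = {p}: {log_odds_db(p):+.1f} dB of evidence")
# p = 1.0 would divide by zero, i.e. it would require infinite evidence.
```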
That, I believe, is why User:wedrifid found it insightful (as did 4 others) to say that, as per your standard, User:wedrifid knows nothing, since that User (like me and most others here) does not use 100% for any probability in our models.
Also, why do you call yourself “goo”? Wouldn’t you rather be something stronger?
If you introduce yourself in the introduction thread, perhaps explaining your name, you can gain some Karma. Currently, you seem to be below zero, which introduces waiting periods between comments. I had that problem when I first posted here, but you can overcome it!
I don’t know why your post got 0 points and no replies. But one reason may be that it is hard to extract the central point or conclusion you are trying to make.
My comment gleaned 4 karma by taking the definition you introduce in the first sentence and tracing the implications using the reasoning Clippy mentions. This leads to the conclusion that I am literally in the epistemic state that is used in a hyperbolic sense by the character Schultz from Hogan’s Heroes. While humour itself is hard to describe, things that are surprising and include a contrast between distant concepts tend to qualify.
(By the way, the member Clippy is roleplaying an early iteration of an artificial intelligence with the goal of maximising paperclips, an example used to refer to a broad class of unfriendly AIs that could plausibly be created by well-meaning but idiotic programmers.)
I’m not role-playing, ape.
In general, the voting system doesn’t reward thoroughness of thought, nor large wads of text. It rewards small things that can be easily digested and seem insightful, no more than one or maybe two inferential steps from the median voter. Nitpicking and jokes are both easily judged.
The opposite is true: large wads of text can be turned into top-level posts, which get tenfold karma.
...Or, perhaps more correctly put, such a being (a non-all-knowing being who nevertheless “knows what you will do”) could not know for sure that he knows what all of the copies of you will do, because in order to know that, he would have to be all-knowing; and so any statement to the effect that “he knows what you will do” is highly questionable.
Just like a being who doesn’t know that he is all-knowing cannot reasonably be said to be all-knowing, a being who doesn’t know that he knows what all of the copies of you will do (because he doesn’t know how many copies of you there exist outside of the parts of the universe he has knowledge of) cannot reasonably be said to know what all of the copies of you will do.