Hereinafter, “to Know x” means “to be objectively right about x; to be subjectively 100 percent certain of x; to have let the former ‘completely scientifically cause’ the latter (i.e. to have used the former to create the latter in a completely scientific manner), such that it cannot, even theoretically, be the case that something other than the former coincidentally and crucially misleadingly caused the latter; and to Know that all these criteria are met”.
Anything that I merely know (“know” being defined as people usually seem to implicitly define it when they use it), as opposed to Know, may turn out to be wrong, for all I know. It seems that the more our scientists know, the more they realize they don’t know. Perhaps this “rule” holds forever, for every advancing civilisation (with negligible exceptions)? I think there could not, even theoretically, be any Knowing in this (or any) world. I conjecture that, much as it is universally theoretically impossible to find a unique integer for every unique real, it is universally theoretically impossible for any being to Know anything at all, such as, for example, which box(es) a human being will take.
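For concreteness, the integer-versus-real analogy is just Cantor’s uncountability theorem. A minimal sketch of the standard diagonal argument (the notation below is mine, added purely as an illustration): suppose some assignment did pair every real in [0,1) with its own integer; then the reals in [0,1) could be listed as r_1, r_2, r_3, … with decimal digits d_{n,k}. Define

\[
x \;=\; \sum_{n=1}^{\infty} \frac{c_n}{10^{\,n}},
\qquad
c_n \;=\;
\begin{cases}
5 & \text{if } d_{n,n} \neq 5,\\
4 & \text{if } d_{n,n} = 5.
\end{cases}
\]

Then x differs from r_n in the n-th decimal place for every n (and, being built from only 4s and 5s, it has a unique decimal expansion), so x is missing from the list. Hence no such assignment can exist.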
Nick Bostrom’s Simulation Argument seems to show that any conceivable being that could theoretically exist might very well, for all that being knows, be living in a computer simulation controlled by a being mightier than itself. This universal uncertainty means that no being could Know that it has perfect powers of prediction over anything whatsoever. Making a “correct prediction” partly by luck is not having perfect powers of prediction, and a being who doesn’t Know what it is doing cannot predict anything correctly without at least some luck (because without luck, Murphy’s law holds). It follows that no being could have perfect powers of prediction.
Now let “Omeg” be defined as the closest thing (in terms of knowledge of the world) to an all-Knowing being (Omega) that could theoretically exist. Let A be the part(s) of an Omeg that are fully known by the Omeg itself, and let B be whatever else there may be in an Omeg.* I suggest that in any Omeg at least the size of the Milky Way, B is large enough to secretly contain mechanisms that could stealthily keep the Omeg arbitrarily ignorant, by making it falsely perceive arbitrarily much of its own wildest thought experiments (or whatever other unready thoughts it sometimes produces) as knowledge (or even Knowledge). I therefore suggest that B, in any Omeg, could be keeping its Omeg under the impression that the A part is sufficient for correctly predicting, say, my choice of boxes, while in reality it isn’t. Conclusion: no theoretically possible being could perfectly predict any other being’s choice of boxes.
You may doubt it, but you can’t exclude the possibility. This means you also can’t exclude the following possibility: whatever implications Newcomb’s problem seems to produce that wouldn’t occur to people if Omega were replaced by, say, a human psychologist, occur to people only because the assumption that there could be such a thing as a perfect predictor of anything is too unreasonable to be worth accepting, since its crucial underpinnings don’t make sense (just as it doesn’t make sense to assume that there is an integer for every real). Because of this, the assumption can be expected to produce arbitrarily misleading conclusions (about decision theory, in this case), much as many seemingly reasonable but heavily biased extreme thought experiments designed to smear utilitarianism scare even very skilled thinkers into drawing false conclusions about utilitarianism.
Or suppose someone goes to space, experiences weightlessness, thinks “hey, why doesn’t my spaceship seem to exert any gravity on me?” and draws the conclusion “it’s not gravity that keeps people down on Earth; it’s just that the Earth sucks”. Just as that conclusion would be flawed, so is the conclusion that Newcomb’s problem shows we should replace Causal Decision Theory with Evidential Decision Theory.
So, to be as faithful to the original Newcomb thought experiment as is possible within reason, I’d interpret it in the way that just barely rids its premises of theoretical impossibility: I’d take Omega to mean Omeg, as defined above. An Omeg is fallible, but probably better than me at predicting my behavior most of the time, so I should definitely one-box, for the same reason that I should one-box if the predictor were a mere human being who just knew me very well. Risking a million dollars just to possibly gain another 1,000 dollars isn’t worth it. Causal Decision Theory leads me to this conclusion just fine.
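To make the “not worth it” arithmetic explicit, here is the standard expected-value comparison, assuming the usual $1,000,000 / $1,000 payoffs and an Omeg whose prediction is correct with probability p whichever choice I actually make (both the figures and that reading of p are assumptions of this sketch, not stated above):

\[
\mathbb{E}[\text{one-box}] \;=\; p \cdot 1{,}000{,}000,
\qquad
\mathbb{E}[\text{two-box}] \;=\; (1-p) \cdot 1{,}000{,}000 \;+\; 1{,}000.
\]

One-boxing has the higher expected payoff whenever

\[
p \cdot 1{,}000{,}000 \;>\; (1-p) \cdot 1{,}000{,}000 + 1{,}000
\quad\Longleftrightarrow\quad
p \;>\; \frac{1{,}001{,}000}{2{,}000{,}000} \;\approx\; 0.5005,
\]

so even a predictor only slightly better than chance is enough to make one-boxing the better bet.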
*) You might think B would be “the real” (or “another, smarter”) Omeg, in virtue of controlling A. But neither B nor A can rationally and completely exclude the possibility that the other is in secret control of both of them. So neither of them can have “perfect powers of prediction” over any being whatsoever.
I know nothing! Nothing!
My previous post resulted in 0 points, despite being very thoroughly thought through. A comment on it, consisting of the four words “I know nothing! Nothing!”, resulted in 4 points. If someone could please explain this, I’d be a grateful Goo.
That is unfortunate. You deserve a better explanation.
I believe a lot of the posters here (because they’re about as good as me at correct reasoning) did not read much of your exposition because, toward the beginning, you posited a circumstance in which someone has 100% certainty of something. But this breaks all good epistemic models. One of the humans here provided a thorough explanation of why, in the article “0 And 1 Are Not Probabilities”.
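To gesture at why (a one-formula paraphrase, not a quotation from that article): write a probability p in log-odds form,

\[
\operatorname{logodds}(p) \;=\; \log\frac{p}{1-p},
\qquad
\lim_{p \to 1^{-}} \log\frac{p}{1-p} \;=\; +\infty,
\qquad
\lim_{p \to 0^{+}} \log\frac{p}{1-p} \;=\; -\infty.
\]

Each observation shifts the log-odds by only a finite amount (the log of its likelihood ratio), so no finite amount of evidence can push a probability all the way to 1 or 0.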
That, I believe, is why User:wedrifid found it insightful (as did 4 others) to say that, as per your standard, User:wedrifid knows nothing, since that User (like me and most others here) does not use 100% for any probability in their models.
Also, why do you call yourself “goo”? Wouldn’t you rather be something stronger?
If you introduce yourself in the introduction thread, perhaps explaining your name, you can gain some Karma. Currently, you seem to be below zero, which introduces waiting periods between comments. I had that problem when I first posted here, but you can overcome it!
I don’t know why your post got 0 points and no replies. But one of the reasons may be that it is hard to extract the central point or conclusion you are trying to make.
My comment gleaned 4 karma by taking the definition you introduce in the first sentence and tracing the implications using the reasoning Clippy mentions. This leads to the conclusion that I am literally in the epistemic state that is used in a hyperbolic sense by the character Schultz from Hogan’s Heroes. While humour itself is hard to describe, things that are surprising and include a contrast between distant concepts tend to qualify.
(By the way, the member Clippy is roleplaying an early iteration of an artificial intelligence with the goal of maximising paperclips—an example used to reference a broad group of unfriendly AIs that could plausibly be created by well-meaning but idiotic programmers.)
I’m not role-playing, ape.
In general, the voting system doesn’t reward thoroughness of thought, nor large wads of text. It rewards small things that can be easily digested and seem insightful, no more than one or maybe two inferential steps from the median voter. Nitpicking and jokes are both easily judged.
The opposite is true: large wads of text can be turned into top-level posts, which get tenfold karma.