You are nitpicking. Fine, let’s say that Omega is likewise incapable of detecting whether you’ll have a heart attack or be eaten by a pterodactyl. He just knows whether your mind is set on one-boxing or two-boxing.
Did this just remove all your objections about “omniscience” and Newcomb’s box, since Omega has now been established not to know whether you’ll be eaten by a pterodactyl before choosing a box? If so, I suggest we make Omega’s inability to predict death-by-pterodactyl a permanent feature of his character.
Isn’t it the case that, in the description of the problem, Omega has made enough correct predictions that we should expect many people to have had strokes, survived, and gone on to make the decision that was predicted? Is it reasonable to expect that several of the people who had strokes made a different decision afterwards? (Substitute any influence on the individual that Omega doesn’t model, if that helps.)
If you correctly predicted the output of a program enough times that several factors you hadn’t taken into account should each have occurred many times, or if you correctly answered “failed to complete” for the one run where bit rot crashed the system instead of letting it print “Hello World”, I would update toward your omniscience.
You are wrong: there can be a power failure during at least one of those runs, and you have not identified when those failures will occur.
More nitpicking, more absurd fighting against the hypothetical. Omega isn’t specified as having made the prediction a year ago; he may have examined your brain and made the prediction two seconds before filling the boxes accordingly and presenting you with the problem. How many people do you expect to have had strokes in those two seconds?
Look, if you resort to “people could have had strokes in the time since Omega made his prediction, which would have drastically changed their brain chemistry”, then you’re making ludicrously desperate attempts to defend an indefensible position. Please picture how this type of argument looks from an external point of view: it really doesn’t look good.
Some arguments are so bad that to speak them is evidence of the lack of a better argument, and if anything they strengthen the opposite position.
Two seconds from the end of the prediction to the decision: there are ~5e6 deaths due to stroke annually in the world (population ~5e9). That’s a rate of about 1e3 person-years per stroke death, or roughly 3e10 person-seconds per stroke death.
If Omega asked everyone on the planet only once, at a random time, and it took them only two seconds to respond, the odds are roughly one in three that someone would die from a stroke before they could answer. (Plus one more who had the onset of a fatal stroke during that period but made a choice before dying, and an ungodly number who were experiencing a stroke at the time.)
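As a sanity check, here is a minimal back-of-the-envelope sketch of that arithmetic. The population and death-rate figures are the round numbers quoted above, not real epidemiology:

```python
import math

# Round figures from the comment above (not real epidemiology).
population = 5e9                 # people on the planet
stroke_deaths_per_year = 5e6     # annual deaths due to stroke
seconds_per_year = 3.15e7

# Exposure per stroke death, in person-seconds.
person_seconds_per_death = population * seconds_per_year / stroke_deaths_per_year
# ~3.15e10, matching the ~3e10 figure above.

# Everyone is asked once and takes two seconds to answer.
window_seconds = 2.0
expected_deaths = population * window_seconds / person_seconds_per_death
p_at_least_one = 1 - math.exp(-expected_deaths)   # Poisson approximation

print(f"expected mid-answer stroke deaths: {expected_deaths:.2f}")  # ~0.32
print(f"P(at least one death): {p_at_least_one:.2f}")               # ~0.27
```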
The problem also works if Omega’s failure rate is 1 in 1.5e10 or even much larger, so long as it stays well below the break-even point of about 49.95% (equivalently, so long as Omega’s accuracy stays above about 50.05%).
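Where that ~50.05% figure comes from, as a short sketch assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one):

```python
# p = Omega's accuracy. One-boxers get the million iff Omega predicted
# correctly; two-boxers always get the thousand, plus the million iff
# Omega got them wrong. (Standard $1,000,000 / $1,000 payoffs assumed.)
def ev_one_box(p: float) -> float:
    return p * 1_000_000

def ev_two_box(p: float) -> float:
    return (1 - p) * 1_000_000 + 1_000

# Break-even: p * 1e6 = (1 - p) * 1e6 + 1e3  =>  p = 1_001_000 / 2_000_000
p_star = 1_001_000 / 2_000_000
print(p_star)   # 0.5005: one-boxing wins for any accuracy above 50.05%
assert abs(ev_one_box(p_star) - ev_two_box(p_star)) < 1e-6
```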
Assume Omega has been observed to get the prediction right 999,999,999 times out of every billion. Would you two-box in the hope that it gets you wrong?
Rationally? Only if I had sufficient evidence that I was a one-in-a-billion exception, enough to pull my posterior estimate of Omega being right about me below about 50%. In actuality? Probably, if I thought there was at least a one-in-six chance that I could pull it off, based on gut feelings.
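A minimal sketch of what that update would take, assuming the observed 1-per-billion failure rate is the prior; the ~49.95% threshold is just the complement of the break-even accuracy computed above:

```python
# Observed failure rate: one wrong prediction per billion.
prior_wrong = 1e-9
# Two-boxing becomes rational once P(Omega is wrong about me) > ~49.95%.
needed_posterior_wrong = 0.4995

prior_odds = prior_wrong / (1 - prior_wrong)                         # ~1e-9
needed_odds = needed_posterior_wrong / (1 - needed_posterior_wrong)  # ~0.998
bayes_factor = needed_odds / prior_odds
print(f"required likelihood ratio: {bayes_factor:.2e}")              # ~1e9
```

In other words, you would need roughly billion-to-one evidence that you are the exception before two-boxing pencils out.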
What, and Omega can’t figure out how the stroke will affect their cognition on a two-second timeframe?
Even assuming that the average person can decide in two seconds...
Omega can’t predict the stroke on the two-second timeframe; it’s too busy finishing the simulation of the player’s brain that it started four seconds ago to notice that he’s going to throw a clot as soon as he stands up. (Omega has to perform a limited simulation of the universe in order to complete the simulation before the universe does; in the extreme case, I allow a gamma ray or some other particle to interact with a sodium ion and trigger a neuron that makes the prediction wrong. Omega can’t predict that without directly breaking physics as we know it.)
You think you know a great deal more about Omega than the hypothetical allows you to deduce.
The gamma rays will produce a very, very small error rate.
The hypothetical only allows for zero error; if Omega knows everything that I will encounter, Omega has superluminal information. QED.