Is the Predictor omniscient or making a prediction?
A tangent: when I worked at a teen homeless shelter there would sometimes be a choice for clients to get a little something now or more later. Now won every time; later never. Anything close to a bird in hand was valued more than a billion ultra birds not in the hand. A lifetime of being betrayed by adults, poor future-planning skills, or both, among other things, might be why that happened. Two boxes without any doubt for those guys. As Predictors they would always predict two boxes and be right.
He makes a statement about the future which, when evaluated, is true. What’s the difference between accurate predictions and omniscience?
On that tangent: WTF? Who creates a system in which they can offer either some help now, or significantly more later, unless they are malicious or running an experiment?
So when I look at the source code of a program and state “this program will throw a NullPointerException when executed” or “this program will go into an endless loop” or “this program will print out ‘Hello World’”, am I being omniscient?
Look, I’m not discussing Omega or Newcomb here. Did you just call ME omniscient because in real life I can predict the outcome of simple programs?
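(For concreteness, a minimal sketch of the sort of trivially predictable program being described; Java is assumed only because NullPointerException is a Java exception, and the snippet is illustrative rather than anything from the original thread.)

```java
// A program whose outcome can be read straight off the source:
// it dereferences a null reference, so it throws a
// NullPointerException on every run (barring power failures
// and the like, which is exactly the point of the reply below).
public class Predictable {
    public static void main(String[] args) {
        String s = null;
        System.out.println(s.length()); // always throws NullPointerException
    }
}
```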
You are wrong. There can be a power failure during at least one of the times that program runs, and you have not identified when those failures will occur.
You are nitpicking. Fine, let’s say that Omega is likewise incapable of detecting whether you’ll have a heart attack or be eaten by a pterodactyl. He just knows whether your mind is set on one-boxing or two-boxing.
Did this just remove all your objections about “omniscience” and Newcomb’s boxes, since Omega has now been established not to know whether you’ll be eaten by a pterodactyl before choosing a box? If so, I suggest we make Omega’s inability to predict death-by-pterodactyl a permanent feature of Omega’s character.
Isn’t it the case that, in the description of the problem, Omega has made enough correct predictions that we should expect many people to have had strokes, survived, and gone on to make the decision which was predicted? Is it reasonable to expect that several of the people who had strokes would make a different decision afterwards? (Substitute any influence on the individual that Omega doesn’t model, if that helps.)
If you had correctly predicted the outcome of a program enough times that factors you didn’t take into account should have intervened many times over, or if you had correctly answered “failed to complete” in the one case where bit rot crashed the system instead of letting it print “Hello World”, I would update toward your omniscience.
More nitpicking, more absurd fighting against the hypothetical. Omega isn’t specified as having made the prediction a year ago; he may have examined your brain and made the prediction two seconds before filling the boxes accordingly and presenting you with the problem. How many people do you expect to have had strokes in those two seconds?
Look, if you resort to “people could have had strokes in the time since Omega made his prediction, which would have drastically changed their brain chemistry”, then you’re making ludicrously desperate attempts to defend an indefensible position. Please try to picture how this type of argument looks from an external point of view; it really doesn’t look good.
Some arguments are so bad that to speak them is evidence of the lack of a better argument, and if anything they strengthen the opposite position.
Two seconds from end of prediction to decision: there are ~5e6 deaths due to stroke annually in the world (population ~5e9). That’s a rate of about 1e3 person-years per stroke death, or ~3e10 person-seconds per stroke death.
If Omega only asked everyone on the planet once, at a random time, and it took them only two seconds to respond, it’s about even money that one of them would die from a stroke before they could answer. (Plus one who had the onset of a fatal stroke during that period but made a choice prior to dying, and an ungodly number who were currently experiencing one).
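(A quick sanity check of that arithmetic, using the same rough figures quoted above; the inputs are the commenter’s estimates, not real epidemiology.)

```java
// Back-of-envelope check of the stroke numbers above.
public class StrokeOdds {
    public static void main(String[] args) {
        double deathsPerYear  = 5e6;    // stroke deaths per year (rough estimate above)
        double population     = 5e9;    // world population (rough estimate above)
        double secondsPerYear = 3.15e7;

        // Person-seconds of exposure per stroke death: ~3e10, as stated.
        double personSecondsPerDeath = population / deathsPerYear * secondsPerYear;
        System.out.printf("person-seconds per stroke death: %.1e%n", personSecondsPerDeath);

        // Chance that at least one of ~5e9 respondents dies of a stroke
        // during their own two-second window: ~0.27, the same order of
        // magnitude as the "about even money" claim above.
        double pOnePerson = 2.0 / personSecondsPerDeath;
        double pAnyDeath  = 1 - Math.pow(1 - pOnePerson, population);
        System.out.printf("P(at least one death in 2s): %.2f%n", pAnyDeath);
    }
}
```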
The problem also works if Omega’s failure rate is 1 in 1.5e10, or even much larger, so long as it stays below about 49.95% (equivalently, so long as Omega’s accuracy stays above about 50.05%).
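(Where the ~50% threshold comes from, assuming the standard payoffs of $1,000,000 in box B and $1,000 in box A; a minimal sketch under those assumptions.)

```java
// Breakeven accuracy for one-boxing with the standard payoffs.
// EV(one-box) = p * 1e6; EV(two-box) = p * 1e3 + (1 - p) * 1.001e6.
// One-boxing wins in expectation once p exceeds the breakeven value.
public class Breakeven {
    public static void main(String[] args) {
        double big = 1_000_000;  // box B payoff
        double small = 1_000;    // box A payoff
        // Setting the two expected values equal and solving for p:
        // p * big = p * small + (1 - p) * (big + small)
        double p = (big + small) / (2 * big);
        System.out.printf("breakeven accuracy: %.4f%%%n", p * 100); // 50.0500%
    }
}
```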
Assume Omega has been observed to get the prediction right 999,999,999 times out of every billion. Would you two-box in the hope that it gets you wrong?
Rationally? Only if I had sufficient evidence that I was the one-in-a-billion case Omega gets wrong, enough to pull my posterior estimate of its accuracy below about 50%. In actuality? Probably, if I felt there was at least a one-in-six chance that I could pull it off, based on gut feeling.
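(A rough Bayes sketch of the “rationally” clause above: starting from the observed one-in-a-billion error rate, evidence that you personally are the exception would need a likelihood ratio on the order of a billion to pull the posterior up to even odds. Illustrative only.)

```java
// How strong would evidence of being the exception need to be?
public class ExceptionEvidence {
    public static void main(String[] args) {
        double prior  = 1e-9; // P(Omega errs on a random subject), from the observed record
        double target = 0.5;  // posterior needed before two-boxing breaks even

        // Posterior odds = prior odds * likelihood ratio, so the
        // required likelihood ratio is the ratio of the two odds.
        double priorOdds  = prior / (1 - prior);
        double targetOdds = target / (1 - target); // = 1
        double requiredLR = targetOdds / priorOdds;
        System.out.printf("required likelihood ratio: %.1e%n", requiredLR); // ~1e9
    }
}
```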
What, and Omega can’t figure out how the stroke will affect their cognition on a two-second timeframe?
Even assuming that the average person can decide in two seconds...
Omega can’t predict the stroke on the two-second timeframe; it’s too busy finishing the simulation of the player’s brain that started four seconds ago to notice that he’s going to throw a clot as soon as he stands up. (Because Omega has to perform a limited simulation of the universe in order to complete the simulation before the universe does; in the extreme case, I allow a gamma ray or some other particle to interact with a sodium ion and trigger a neuron that makes the prediction wrong. Omega can’t predict that without directly breaking physics as we know it.)
You think you know a great deal more about Omega than the hypothetical allows you to deduce.
The gamma rays will produce a very, very small error rate.
The hypothetical only allows for zero error; if Omega knows everything that I will encounter, Omega has superluminal information. QED.
Situations where you can get something now or something better later but not both come up all the time as consequences of growth, investment, logistics, or even just basic availability issues. I expect it would usually make more sense to do this analysis yourself and only offer the option that does more long-term good, but if clients’ needs differ and you don’t have a good way of estimating, it may make sense to allow them to choose.
Not that it’s much of an offer if you can reliably predict the way the vast majority of them will go.
If you can make the offer right now, then you don’t have that capital tied up in growth, investment, or logistics. Particularly since what you have available now doesn’t cover the current need: all of it will be taken by somebody.
If the Predictor is accurate or omniscient, then the game is rigged and it becomes a different problem. If the Predictor is making guesses, then box predicting and box selecting are both interesting to figure out.
Or you live in a system nobody in particular created (capitalism) and work at a social service with limited resources, with a clientele who have no background experience with adults who can be trusted. An employer telling them “work now and I’ll pay you later” is not convincing, while a peanut butter sandwich right now is.
How about “Here’s a sandwich; if you work for me, I will give you another one at lunchtime and money at the end of the day”?
It’s the case where the immediate reward has to be so much smaller than the delayed reward, yet still mutually exclusive with it, that confuses me; not the discounting due to lack of trust.
What does always choosing some now over more later have to do with Newcomb’s problem?
Simply stating that box B either contains a million dollars or nothing will make people see the million dollars as more distant than the guaranteed thousand in box A, I imagine. That the probabilities reduce that distance to negligible matters only if the person updates appropriately on that information.