I hope I’m not being redundant, but… The common argument I’ve seen is that it must be backward causation if one-boxing predictably comes out with more money than two-boxing.
Why can’t it just be that Omega is really, really good at cognitive psychology, has a complete map of your brain, and is able to use that to predict your decision so well that the odds of Omega’s prediction being wrong are epsilon? This just seemed… well, obvious to me. But most people arguing “backward causation!” seem to be smarter than me.
The possibilities I see are either that I’m seriously missing something here, or even really smart people can’t let go of the idea that our brains are free from physical law on some level.
The entire point of Omega seems to be “Yeah, no, free will isn’t as powerful as you seem to think.” Given 100 people and access to a few megabytes of their conversations, contact lists, Facebook profiles, T-shirt collections, and radio/television/web-surfing habits, you could probably predict how they’ll vote in the next election better than chance. Omega is implied here to have far better models of people than targeted advertising does. What success rate would it take to convince people that Omega isn’t cheating, but is just really, really clever?
Of course, Omega’s abilities aren’t really specified. Maybe it is using time travel. But the laws of physics as we know them seem to favor “Omega understands the human brain” over “Omega can see into the future”, so if this happened in the real world, backward causation would not be my leading hypothesis.
Of course, the hypothesis “Omega cheats with some remote-controlled mechanism inside box B” is even simpler than positing an alien superintelligence with an amazing understanding of individual brains. If we could examine box B after one-boxing and two-boxing, we could probably adjust the probability of the “Omega cheats” hypothesis. I don’t know how to distinguish the backward-causation and perfect-brain-model hypotheses, though.
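The "adjust the probability" step is just a Bayesian update. Here is a toy sketch with made-up priors and likelihoods (all the numbers and hypothesis names below are my illustrative assumptions, not anything specified in the problem): suppose inspecting box B reveals a hidden mechanism, an observation that is likely if Omega cheats and unlikely otherwise.

```python
# Toy Bayesian update on three hypotheses about Omega, using
# made-up priors and likelihoods purely for illustration.
priors = {"cheats": 0.4, "brain_model": 0.4, "backward_causation": 0.2}

# Assumed probability of finding a hidden mechanism inside box B
# under each hypothesis.  Note brain_model and backward_causation
# assign the SAME likelihood to every ordinary observation of the
# box's contents -- which is exactly why outcomes alone can't
# distinguish them.
likelihood = {"cheats": 0.9, "brain_model": 0.01, "backward_causation": 0.01}

evidence = sum(priors[h] * likelihood[h] for h in priors)
posteriors = {h: priors[h] * likelihood[h] / evidence for h in priors}

for h, p in posteriors.items():
    print(f"P({h} | mechanism found) = {p:.3f}")
```

With these (arbitrary) numbers the posterior on "cheats" jumps above 0.98, while the ratio between the other two hypotheses is unchanged, mirroring the point in the text: box inspection can move probability onto or off of the cheating hypothesis, but no such observation separates backward causation from a perfect brain model.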
Of course, the point of the original post wasn’t “reverse engineer Omega’s methods”. The point was “Make decisions that predictably succeed, not decisions that predictably fail but are otherwise more reasonable”. Omega’s methods are relevant only if they allow us to make better decisions than we would with the given information.