As Eliezer points out recently, sometimes you do have to fight the hypothetical.

What hypotheticals in the Newcomb problem might one have to fight, if this be one of those times?
The hypothetical I would fight is that the universe is perfectly predictable. Here’s how I fight it:
In order to be perfectly predictable, the universe must be deterministic. But it is possible for the universe to be deterministic but unpredictable. Here’s how.
For perfect prediction of the universe, the universe must be COMPLETELY simulated. The mechanism that simulates the universe must have memory sufficient to store the state of the universe completely. But that storage mechanism must then store its own state completely, PLUS the rest of the universe. And of course, inside that stored state there must be a complete copy of the stored information, PLUS the rest of the universe.
From this I conclude that the only mechanism that can store the entire state of the universe is the universe itself: as long as “PLUS the rest of the universe” is not an empty set, the memory required by any mechanism that stores the state of the universe is unbounded.
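To make the regress concrete, here is a minimal sketch (my own illustration, not part of the original argument) that unrolls the requirement “store the rest of the universe plus a complete copy of your own contents” level by level; the bit count never settles down unless “the rest of the universe” is empty.

```python
# Toy unrolling of the storage requirement (illustrative sketch only).
# Requirement: the simulator's memory must hold the rest of the universe PLUS a
# complete copy of its own contents. Unrolled k levels deep, the bits needed
# keep growing, unless "the rest of the universe" contributes nothing.

def bits_needed(rest_of_universe_bits: int, levels: int) -> int:
    """Bits required after unrolling the self-inclusion requirement `levels` times."""
    needed = 0
    for _ in range(levels):
        # each level must contain everything the previous level contained,
        # plus another full copy of the rest of the universe
        needed = needed + rest_of_universe_bits
    return needed

print(bits_needed(rest_of_universe_bits=1, levels=10))         # 10
print(bits_needed(rest_of_universe_bits=1, levels=1_000_000))  # 1000000 -- no finite bound
print(bits_needed(rest_of_universe_bits=0, levels=1_000_000))  # 0 -- bounded only if "the rest" is empty
```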
If the only mechanism which can store the entire state of the universe is the universe itself, then the only thing that “knows” everything that will happen is the future state of the universe, and the calculation takes as long as it takes for the thing to actually happen.
So Omega is then the entire universe, but Omega is not able to calculate ahead of time what you will do; she can only complete her calculation at precisely the time you do the thing.
One weakness I see in my argument is that the universe might be infinite in such a way that it CAN contain complete copies of itself, each of which would then contain copies of the copies recursively. In this case, Omega contains a copy of the universe and does her calculations. Are we happy to constrain the universe in this way as a matter of generality? Or does saying that Omega only exists in universes which are infinite in such a way as to be able to contain multiple complete copies of themselves present an interesting limit on Omega?
Just to motivate why Omega would need to simulate the whole universe completely: I will decide to one-box or two-box based on whether a particular small volume of space has more than a certain amount of mass in it or not. The volume I pick is appropriately small, so that half of such volumes do and half don’t. The volume I pick is located a distance of c*T + 1 light-hour from us, where T is the age of the universe. Then my decision depends on a part of the universe which is beyond the sphere of the currently observable universe at the time Omega loads the boxes, but which I will be able to observe with a suitable telescope before I have to choose whether to one-box or two-box. So Omega will have to simulate the entire universe over its entire lifetime, or something of a similar scale of calculation, in order to predict what I will do.
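A rough numerical version of this setup, using the same naive static-universe picture as the paragraph above (observable radius = c*T, no expansion); the specific numbers are my own fill-ins:

```python
# Back-of-the-envelope check of the telescope trick (sketch only; naive
# static-universe picture: observable radius = c * age, no expansion).
C = 299_792_458.0                      # speed of light, m/s
SECONDS_PER_YEAR = 365.25 * 24 * 3600
T = 13.8e9 * SECONDS_PER_YEAR          # rough age of the universe, in seconds
ONE_HOUR = 3600

observable_radius_now = C * T                  # how far light can have reached us so far
target_distance = C * (T + ONE_HOUR)           # "c*T + 1 light-hour" away

# When Omega loads the boxes, the chosen volume is still outside everything
# that can have causally reached Omega (or us):
assert target_distance > observable_radius_now

# One hour later, light emitted from that volume at the start has just arrived,
# so the chooser can look at it through a telescope before picking boxes:
observable_radius_at_choice = C * (T + ONE_HOUR)
assert target_distance <= observable_radius_at_choice
print("margin (light-hours):", (target_distance - observable_radius_now) / (C * ONE_HOUR))  # ~1.0
```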
So, do we need a timeless decision theory, or do we need to fight the hypothetical?
God f*ing damn it. Again? He has 99.9% accuracy; problem resolved. Every decision remains identical unless a change of 1/1000 in your calculations causes a different action, which it never should in Newcomboid problems.
Note to anyone and everyone who encounters any sort of hypothetical with a “perfect” predictor: if you write it, always state an error rate, and if you read it, assume one (but not one higher than whatever error rate would make a TDT agent choose to two-box).
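For concreteness, here is a small expected-value check of where that threshold sits, using the standard $1,000 / $1,000,000 Newcomb payoffs (my own fill-in; the comment does not specify amounts):

```python
# Where does the "error rate that would make a one-boxer switch" sit?
# Sketch using the standard Newcomb payoffs: $1,000 always in the transparent
# box; $1,000,000 in the opaque box iff Omega predicted one-boxing.
SMALL = 1_000
BIG = 1_000_000

def expected_value(one_box: bool, accuracy: float) -> float:
    """Expected payoff when Omega's prediction matches the actual choice with probability `accuracy`."""
    p_big = accuracy if one_box else (1.0 - accuracy)  # chance the opaque box is full
    return p_big * BIG + (0.0 if one_box else SMALL)

# One-boxing beats two-boxing when accuracy*BIG > (1-accuracy)*BIG + SMALL,
# i.e. accuracy > (BIG + SMALL) / (2 * BIG) = 0.5005.
threshold = (BIG + SMALL) / (2 * BIG)
print(f"break-even accuracy: {threshold:.2%}")

for acc in (0.999, 0.75, threshold, 0.50):
    print(f"accuracy {acc:.4f}: one-box {expected_value(True, acc):>12,.0f}"
          f"  two-box {expected_value(False, acc):>12,.0f}")
# At 99.9% accuracy the recommendation is the same as with a "perfect"
# predictor: one-boxing wins by a wide margin, exactly as the reply above says.
```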
The mechanism can just store a reference to itself.
Actually, this will not work. Since Omega would be running a simulation of the universe, including a simulation of his own simulation of the universe, the memory for the simulation and for the simulation-of-the-simulation would need to be distinct, since they would not contain the same values as the simulation went on.
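A tiny sketch of why the shared-reference trick breaks down once the simulation starts stepping (my own illustration, with a deliberately trivial one-variable “universe”):

```python
# Why "just store a reference to itself" fails once the simulation advances
# (toy illustration only; the "universe" here is a single counter).
universe = {"clock": 0}
universe["simulator_memory"] = universe   # the self-reference trick: no extra storage

universe["clock"] += 1                    # the outer simulation advances one step

# The inner copy is supposed to hold the simulator's working contents, which in
# general differ from the outer state as the run proceeds -- but because it is
# literally the same object, it gets dragged along with every outer update:
print(universe["simulator_memory"]["clock"])   # 1, not the distinct value it would need
```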