To predict your reaction perfectly, someone has to go through your source code on some computational substrate—which means executing it, if only in their “mind”.
Certainly not true in all possible worlds. For example, it could be that for some strange reason humans always 1-box when encountering Newcomb’s problem. Then, knowing you’re a human is sufficient to predict that you will 1-box.
Also to illustrate, you can see where a cannonball will land without simulating the cannonball.
you can see where a cannonball will land without simulating the cannonball.
To predict with any degree of accuracy where a cannonball will land, I’m going to need to know the muzzle velocity, angle, and elevation of the cannon, and then I’m going to need to mathematically simulate the cannon firing. If I want to be more confident or more accurate, I’m also going to need to know the shape, size, and mass of the cannonball; and the current weather conditions; and I’m going to need to simulate the cannon’s firing in more detail.
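For concreteness, here is a minimal sketch of that calculation in Python, assuming a drag-free trajectory; the function name, the example numbers, and the vacuum assumption are mine, not anything from the thread:

```python
import math

def landing_distance(v0, angle_deg, height, g=9.81):
    """Horizontal distance travelled by a drag-free projectile.

    v0: muzzle velocity (m/s); angle_deg: launch angle above horizontal;
    height: elevation of the muzzle above the landing plane (m).
    """
    theta = math.radians(angle_deg)
    vx = v0 * math.cos(theta)
    vy = v0 * math.sin(theta)
    # Time of flight from height + vy*t - 0.5*g*t**2 = 0 (positive root).
    t = (vy + math.sqrt(vy**2 + 2 * g * height)) / g
    return vx * t

# Example: 300 m/s muzzle velocity, 30 degrees, fired from a 10 m rampart.
print(landing_distance(300.0, 30.0, 10.0))  # roughly 7960 m in a vacuum
```

Even this toy version needs exactly the inputs listed above, and adding drag or wind means adding more inputs and a finer-grained calculation.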
If I wanted to predict anything about a chaotic system, like the color of an arbitrary pixel in a gigapixel rendering of the Mandelbrot Set, I’d need to do a much finer-grained simulation—even if I’m just looking for a yes/no answer.
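To make that concrete, here is a standard escape-time membership test for a single point of the Mandelbrot set; the iteration budget is arbitrary, and near the boundary even a yes/no answer can eat the whole budget:

```python
def in_mandelbrot(c, max_iter=1000):
    """Escape-time test: does the orbit of 0 under z -> z**2 + c stay bounded?"""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:   # escaped: definitely outside the set
            return False
    return True            # no escape within the budget: treated as inside

print(in_mandelbrot(-1.0 + 0.0j))  # settles into a period-2 cycle: True
print(in_mandelbrot(1.0 + 1.0j))   # escapes within two iterations: False
```

For a generic pixel near the boundary, the binary answer still comes from actually running the iteration.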
To get an answer from a particular decision theory, Omega is going to have to do the functional equivalent of lying to that decision theory—tracing its execution path along a particular branch which corresponds to a statement from Omega that is not veridical. I don’t think we can say whether that simulation is detailed enough to be consciously aware of the lie, but I don’t think that’s what’s being asked.
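A toy sketch of what "tracing a branch" could look like, with every name and the payoff structure invented for illustration: Omega can evaluate the decision procedure under the premise that its statement has been made, whether or not that statement is veridical at the time.

```python
# Hypothetical decision procedure; the premise flag stands in for Omega's
# (possibly not-yet-veridical) claim that its predictions are reliable.
def decide(omega_claim_is_reliable: bool) -> str:
    if omega_claim_is_reliable:
        # If the prediction tracks this very choice, one-boxing wins the million.
        return "one-box"
    # Otherwise box B's contents are already fixed and taking both dominates.
    return "two-box"

# "Lying" to the decision theory: run it down the branch where the claim holds.
print(decide(omega_claim_is_reliable=True))   # -> one-box
```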
To predict with any degree of accuracy where a cannonball will land, I’m going to need to know the muzzle velocity, angle, and elevation of the cannon, and then I’m going to need to mathematically simulate the cannon firing.
No, you really don’t. LCPW (least convenient possible world), please. The cannonball is flying unrestricted through the air in an eastward direction and will impact a giant tub of jello.
I agree that only the components that are relevant need to be modeled/simulated.
However, the Newcomb decision involves a lot of cognitive work and calls to your utility function, and given the many interconnections between the different components of our cognitive architecture, non-trivial parts of you would need to be modeled. That is unlike your cannonball example, where mass and shape suffice.
For your hypothetical: if knowing you’re human were enough to perfectly predict that particular decision, then to ascertain that relationship an initial simulation of the relevant components must have occurred at some point; how else would that belief of Omega’s be justified? I do agree that such a possibility (just one simulation for all of mankind) would lower your belief that you are merely Omega’s simulation. However, since Omega predicts perfectly, if there are any human beings who do not follow that most general rule (e.g. human → 1-boxes), the number of simulations would rise again. The possible worlds in which just one simulation suffices should be quite strange, and shouldn’t skew the expected number of needed simulations per human by much.
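To put a toy number on that last claim (all figures made up, nothing from the thread): mix a world where one simulation covers everyone with worlds where each human is simulated individually, and the expectation barely moves unless the one-simulation world is very likely.

```python
def expected_sims_per_human(p_one_rule_world, n_humans, sims_otherwise=1.0):
    """Expected simulations per human across two kinds of possible world.

    With probability p_one_rule_world a single simulation settles the question
    for all n_humans; otherwise Omega needs sims_otherwise per person.
    """
    return p_one_rule_world / n_humans + (1 - p_one_rule_world) * sims_otherwise

# A 1% chance of the "one rule covers everyone" world leaves the expectation near 0.99.
print(expected_sims_per_human(0.01, 8_000_000_000))
```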
Let’s take your cannonball example. Can you explain how predicting where a cannonball will land does not involve simulating the relevant components of the cannonball, and the situation it is in? The simulation requires higher fidelity the more accurate the prediction has to be, and for a perfect simulation the involved components would need to be perfectly mimicked.
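To illustrate the fidelity point with the same cannonball: a crude fixed-step integrator (everything below is my own toy setup, not something from the thread) gets closer to the exact answer only as the step size shrinks, i.e. as the simulation’s fidelity increases.

```python
import math

def simulated_range(v0, angle_deg, dt, g=9.81):
    """Range of a drag-free projectile via a crude fixed-step integration."""
    theta = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while y >= 0.0:
        vy -= g * dt          # update velocity, then position (semi-implicit Euler)
        x += vx * dt
        y += vy * dt
    return x

# Analytic range for 100 m/s at 45 degrees is about 1019.4 m; the estimates
# converge toward it as dt shrinks.
for dt in (1.0, 0.1, 0.01, 0.001):
    print(dt, round(simulated_range(100.0, 45.0, dt), 1))
```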
The simulation requires higher fidelity the more accurate the prediction has to be, and for a perfect simulation the involved components would need to be perfectly mimicked.
This is false, unless you’re also expecting perfect precision, whatever that means. Omega is looking for a binary answer, so it probably doesn’t need much precision at all. It’s like asking whether the cannonball will fall east or west of its starting position: you don’t need to model much about it at all to predict that perfectly.
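A minimal sketch of how little modelling the binary version needs (the function and the numbers are just for illustration, and wind is assumed away):

```python
def lands_east(eastward_velocity):
    """Will the ball come down east of where it started?

    Ignoring wind, the sign of the eastward velocity component settles the
    yes/no question; mass, shape, and exact speed only affect *how far* east.
    """
    return eastward_velocity > 0.0

print(lands_east(42.0))   # True: any eastward launch lands to the east
print(lands_east(-3.5))   # False: fired westward
```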
how else would that belief of Omega’s be justified
Nobody claimed that Omega’s beliefs are justified, whatever that means. Omega doesn’t need to have beliefs. Omega just needs to be known to always tell the truth, and to be able to perfectly predict how many boxes you will choose. He could have sprung into existence at the start of the universe with those abilities, for all we know.
If the universe is deterministic, then one can know based on just the starting state of the universe how many boxes you will pick. Omega might be exploiting regularities in physics that have very little to do with the rest of your mind’s computation.