I agree that only the components that are relevant need to be modeled/simulated.
However, the Newcomb decision involves a lot of cognitive work and calls to your utility function, and given the many interconnections between the components of our cognitive architecture, non-trivial parts of yourself would need to be modeled. That is unlike your cannonball example, where mass and shape suffice.
For your hypothetical: if knowing you're human were enough to perfectly predict that particular decision, then an initial simulation of the relevant components must have occurred at some point to establish that relationship; how else would that belief of Omega's be justified? I do agree that such a possibility (a single simulation covering all of mankind) would lower your credence that you are merely Omega's simulation. However, since Omega predicts perfectly, if there are any human beings who do not follow that most general rule (e.g., human → one-boxes), the number of simulations rises again. The possible worlds in which a single simulation suffices should be quite strange, and shouldn't skew the expected number of needed simulations per human very much.
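A rough back-of-the-envelope version of that last point (the symbols N and q are my own labels, not anything from the problem statement): with N humans and probability q that a single rule like "human → one-boxes" covers literally everyone,

```latex
% N humans; q = probability that one rule covers everyone (hypothetical labels)
E[\text{simulations per human}] \approx q \cdot \tfrac{1}{N} + (1 - q) \cdot 1
```

For small q this stays close to 1, which is why the strange one-simulation worlds barely move the expectation.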
Let's take your cannonball example. Can you explain how predicting where a cannonball will land does not involve simulating the relevant components of the cannonball and the situation it is in? The simulation requires higher fidelity the more accurate the prediction has to be; for a perfect prediction, the involved components would need to be perfectly mimicked.
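Here is a minimal sketch of what I mean (a toy model of my own, not anything from the thread): Euler-integrating a drag-affected trajectory at two timestep sizes. The coarser step, i.e. the lower-fidelity simulation, gives a noticeably less accurate landing point.

```python
# Toy model (hypothetical parameters): a cannonball with quadratic drag.
# A coarser Euler timestep (lower "fidelity") yields a less accurate landing x.
import math

def landing_x(v0=50.0, angle_deg=45.0, drag=0.02, dt=0.001):
    """Euler-integrate 2D motion with quadratic drag; return landing x."""
    g = 9.81
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x, y = 0.0, 0.0
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        vx -= drag * speed * vx * dt        # drag decelerates horizontal motion
        vy -= (g + drag * speed * vy) * dt  # gravity plus drag on vertical motion
        x += vx * dt
        y += vy * dt
    return x

print(landing_x(dt=0.001))  # finer step: higher-fidelity estimate of the landing point
print(landing_x(dt=0.1))    # coarse step: noticeably different landing x
```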
The simulation requires higher fidelity the more accurate the prediction has to be; for a perfect prediction, the involved components would need to be perfectly mimicked.
This is false, unless you're also expecting perfect precision, whatever that means. Omega is looking for a binary answer, so it probably doesn't need much precision at all. It's like asking whether the cannonball will fall east or west of its starting position: you don't need to model much about it at all to predict that behavior perfectly.
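To make the contrast concrete, a deliberately trivial sketch (my own example, assuming no wind): the binary east/west answer depends only on the sign of the horizontal launch velocity, regardless of mass, drag, or any simulation fidelity.

```python
# Sketch of the binary point (hypothetical example, assuming no wind):
# east/west of the start is fixed by sign(vx0) alone; no trajectory
# simulation is needed for a perfectly accurate binary prediction.
def lands_east(vx0: float) -> bool:
    """Return True iff the ball lands east of its starting position."""
    return vx0 > 0.0

print(lands_east(3.7))   # True: any eastward launch lands east
print(lands_east(-0.2))  # False: a westward launch lands west
```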
how else would that belief of Omega’s be justified
Nobody claimed that Omega's beliefs are justified, whatever that means. Omega doesn't need to have beliefs. Omega just needs to be known to always tell the truth, and to be able to perfectly predict how many boxes you will choose. He could have sprung into existence at the start of the universe with these abilities, for all we know.
If the universe is deterministic, then how many boxes you will pick could in principle be read off from the starting state of the universe alone. Omega might be exploiting regularities in physics that have very little to do with the rest of your mind's computation.