Responding to the supposed difference between the cases:
Omega puts the million in the box or not before the game has begun, depending on your former disposition to one-box or two-box.
Then the game begins, and you are considering whether to one-box or two-box. The choice itself is intrinsically harmless; it merely happens to be correlated with your previous disposition and with Omega’s choice. Likewise, your present disposition to one-box or two-box is intrinsically harmless; it too is merely correlated with your previous disposition and with Omega’s choice.
You can no more change your previous disposition than you can change whether you have the lesion, so the two cases are equivalent.
And if people’s actions are deterministic, then in theory there could be an Omega that is 100% accurate. Nor would there be any need for simulation; as cousin_it has pointed out, it could “analyze your source code” and come up with a proof that you will one-box or two-box. In that case the 100%-correlated smoking lesion and Newcomb’s problem would be precisely equivalent. The same holds if each has a 90% correlation, and so on.
If some subset of the information contained within you is sufficient to prove what you will do, simulating that subset is a relevant simulation of you.
I’m not sure what kind of proof you could carry out without going through the steps, at which point you would essentially have produced a simulation.
Could you give an example of the type of proof you’re proposing, so I can judge for myself whether it seems to involve running through the relevant steps?
See cousin_it’s post: http://lesswrong.com/lw/2ip/ai_cooperation_in_practice/
Many programs can be proven to have a certain result without any simulation, not even of a subset of the information. For example, think of a program that discovers the first 10,000 primes, increasing a counter by one for each prime it finds, and then stops. You can prove that the counter will equal 10,000 when it stops, without simulating this program.
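For concreteness, here is a minimal sketch of such a program in Python (the language, names, and the particular primality test are my own choices; the original describes the program only in prose):

```python
# A sketch of the program described above: it finds the first 10,000
# primes by trial division, incrementing a counter once per prime found,
# and then stops.

def is_prime(n):
    """Return True if n is prime (naive trial division)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

counter = 0      # incremented by one for each prime found
candidate = 2
while counter < 10000:
    if is_prime(candidate):
        counter += 1
    candidate += 1

print(counter)   # provably 10000: the loop exits only when counter == 10000
```

The point of the example survives in code form: the postcondition counter == 10000 follows from the loop structure alone, without executing the loop.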
See, to me that is a mental simulation of the relevant part of the program.
The counter will increase step by step; at each step it remains an integer, it passes through every integer from 1 to 10,000 in turn, and upon reaching 10,000 the program stops.
The fact that the relevant part of the program is as ridiculously simple as a counter just means that the simulation is easy.
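To make the disagreement sharp, here is the non-simulating proof written out in standard Hoare-logic style (the formalization is mine, not from the thread); whether walking through this invariant counts as “simulating” the counter is exactly what is at issue:

```latex
% Loop-invariant argument for the prime-counting program, with c the counter.
% Invariant I := 0 <= c <= 10000.
\[
\{\, c = 0 \,\}\;
\textbf{while } c < 10000:\ \text{find next prime};\ c := c + 1\;
\{\, c = 10000 \,\}
\]
% Each iteration preserves I (c increases by exactly 1 while c < 10000),
% and the loop can only exit when its guard fails:
\[
I \wedge \neg(c < 10000) \;\Longrightarrow\; c = 10000 .
\]
% Termination follows from the infinitude of primes. Nowhere does the proof
% track which numbers are prime or what value c takes at any intermediate step.
```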