That’s true for simple cases, yes, and that’s why I added “usually” in “there is (usually) no way to know the future from the past without simulating the present”.
But if you have an algorithm able to produce exactly the same output as I would (say exactly the same things, including talk about free will and consciousness) from the same inputs, then it'll have the same amount of consciousness and free will as I do—or you believe in zombies.
True, but I think you're making a bigger deal of that than it is. Suppose our Omega is the one from Newcomb's problem, and all it wants to know is whether you'll one-box or two-box. It doesn't need to run an algorithm that produces the same output as you in all instances; it only needs to determine one specific bit of the output you will produce in a specific state S. There is a good chance that a quick scan of your algorithm is enough to figure this out, without needing to simulate anything at all.
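To make the "quick scan" point concrete, here is a minimal sketch (my own illustration, not anything from the thread; the agent and predictor names are hypothetical). The predictor reads the source of the agent's decision procedure and extracts the one bit it cares about without ever executing it:

```python
import inspect

def agent_decision(state):
    # This agent one-boxes unconditionally, whatever the state.
    return "one-box"

def predict_without_simulating(decision_fn):
    """Inspect the source of a decision procedure; if every path returns
    the same choice, report that choice without calling the function."""
    source = inspect.getsource(decision_fn)
    if 'return "one-box"' in source and 'return "two-box"' not in source:
        return "one-box"
    if 'return "two-box"' in source and 'return "one-box"' not in source:
        return "two-box"
    return None  # a quick scan isn't enough; deeper analysis (or simulation) needed

print(predict_without_simulating(agent_decision))  # -> "one-box"
```

The point of the sketch is only that static inspection can sometimes yield the single bit Omega needs, even when reproducing the agent's full behavior would require simulation.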
The reason this is a big deal is that "free will" means two things to us. On the one hand, it's this philosophical concept. On the other hand, we think of having free will in opposition to being manipulated and coerced into doing something. These are obviously related. But just because we have free will in the philosophical sense doesn't mean we have it in the second sense. So it's important to keep the two senses as separate as possible.
Because Omega can totally play you like a fiddle, you know.