That is incorrect. If I tell you to add up the numbers from 1 to 100 and you start counting, I know by a completely different algorithm that you’re going to get 5050. This generalizes: Omega need only prove that the output of your algorithm is the same as the output of a simpler algorithm (without, I may note, running it), and run that instead.
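Here's a toy sketch of that shortcut in code (the function names are mine, purely for illustration): the prediction comes from Gauss's closed form rather than from executing the counting loop.

```python
def count_up(n):
    """The 'slow' algorithm: literally add up the numbers 1..n."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total


def predict_count_up(n):
    """A completely different algorithm that provably produces the same
    output, Gauss's closed form n*(n+1)/2, without running the loop."""
    return n * (n + 1) // 2


# The prediction 5050 is obtained without ever executing count_up(100).
assert predict_count_up(100) == 5050
assert count_up(100) == predict_count_up(100)
```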
Omega cannot do this in general. Given an arbitrary algorithm with some asymptotic complexity, there is no general procedure that can get the same result faster.
Computational complexity puts limits even on superintelligences.
I don’t think the speed is essential to my argument, though. The point is that it’s possible to determine the output of the algorithm that is you, without running that algorithm.
In general, no. To predict the output of an arbitrary algorithm, you have to have equivalent states to the algorithm. If I give you a Turing machine and ask you what its output is, you can’t do any better than running it.
You can do various trivial transforms to it and say “no, I’m running a new machine”, but it’s as expensive as running the original and will have to be effectively isomorphic to it, I suspect.
I agree.
But if you’re saying that it’s proven that “you have to …”, I wasn’t aware of that.
I don’t have a proof of that claim, either, just a strong intuition. I should have specified that more clearly. If some other LWer has a proof in mind, I’d love to see it.
Here are some related things I do have proofs for.
There’s no general procedure for figuring out whether two programs do the same thing—this follows from Rice’s theorem. (In fact, this is undecidable for arbitrary context-free languages, let alone more general programs.)
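Here's a minimal sketch of the reduction behind that claim. Both `programs_equivalent` and `simulate` are purely hypothetical oracles that I'm passing in as parameters just to make the argument explicit; the former is exactly the thing that can't exist.

```python
def halting_decider_from_equivalence(programs_equivalent, simulate):
    """If we had a decider for 'do these two programs compute the same
    function?', we could turn it into a halting decider, which cannot
    exist. Both arguments are hypothetical oracles, passed in only to
    make the reduction explicit."""
    def halts(machine, tape):
        def always_true(_x):
            return True

        def maybe_true(_x):
            simulate(machine, tape)  # returns only if `machine` halts on `tape`
            return True

        # maybe_true computes the same function as always_true exactly
        # when `machine` halts on `tape`, so deciding equivalence here
        # would decide halting.
        return programs_equivalent(maybe_true, always_true)

    return halts
```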
For any given problem, there is some set of asymptotically optimal algorithms, in terms of space or time complexity. And a simulator can’t improve on that bound by more than a constant factor. So if you have an optimal algorithm for something, no AI can improve on that.
Now suppose I give you an arbitrary program and promise that there’s a more efficient program that produces the same output. There can’t be a general way to find the optimal program that produces the answer.*
(*Proof by reduction from the halting problem: Suppose we had an oracle for minimizing programs. For an arbitrary Turing machine T and input I, create a program P that ignores its input and simulates T on I. If we could minimize this program, we’d have a halting oracle.)
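For concreteness, here is roughly what that construction looks like, with `simulate` standing in for a hypothetical universal simulator and the "minimizer" being the oracle shown impossible:

```python
def build_p(T, I, simulate):
    """The program P from the footnote: it ignores its own input and just
    runs T on the fixed input I. If T halts on I, P is equivalent to a
    constant-time program such as `lambda x: 1`; if T never halts on I,
    no halting program is equivalent to P."""
    def P(_ignored_input):
        simulate(T, I)  # hypothetical universal simulator; may never return
        return 1
    return P

# A hypothetical program minimizer would have to tell us whether P
# collapses to an O(1) program, i.e. whether T halts on I, and that is
# exactly the halting problem.
```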
When you say “optimal”, do you mean space or time used?
When you say “minimize” you mean “shortest equivalent program”?
I love this list of undecidable problems.
I meant to minimize in terms of asymptotic time or space complexity, not length-of-program.
Oh. That makes sense (since you can’t even tell if the machine will halt or not). I’m still not convinced it applies to free will, but I will not argue the point further since I agree with the conclusion anyway.
That’s true for simple cases, yes, and that’s why I added “usually” in “there is (usually) no way to know the future from the past without simulating the present”.
But if you have an algorithm able to produce exactly the same output as I would (say exactly the same things, including talk about free will and consciousness) from the same inputs, then it’ll have the same amount of consciousness and free will as I do—or you believe in zombies.
True, but I think you’re making a bigger deal of this than it is. Suppose our Omega is the one from Newcomb’s problem, and all it wants to know is whether you’ll one-box or two-box. It doesn’t need to run an algorithm that produces the same output as you in all instances. It only needs to determine one specific bit of the output you will produce in a specific state S. There is a good chance that a quick scan of your algorithm is enough to figure this out, without needing to simulate anything at all.
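As a toy illustration of that "quick scan" (everything here, including `omega_scan` and the agent, is hypothetical and far simpler than a real agent):

```python
import inspect


def transparent_agent(observation):
    """A toy decision procedure whose choice doesn't depend on its input."""
    return "one-box"


def omega_scan(agent):
    """Read off the single bit Omega cares about by inspecting the agent's
    source, without executing it. Real agents are rarely this transparent."""
    source = inspect.getsource(agent)
    if '"one-box"' in source and '"two-box"' not in source:
        return "one-box"
    if '"two-box"' in source and '"one-box"' not in source:
        return "two-box"
    return None  # a quick scan isn't enough; Omega would have to look harder


print(omega_scan(transparent_agent))  # prints "one-box"; the agent was never run
```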
The reason this is a big deal is that “free will” means two things to us. On the one hand, it’s this philosophical concept. On the other hand, we think of having free will in opposition to being manipulated and coerced into doing something. These are obviously related. But just because we have free will in the philosophical sense, doesn’t mean that we have free will in the second sense. So it’s important to keep these as separate as possible.
Because Omega can totally play you like a fiddle, you know.