That formal theory involving infinite or near-infinite computing power has anything to do with AI and computing in the real world.
I’m not sure why that strains your credulity. Note, for example, that computability results often tell us not to try something. The undecidability of the halting problem and related results (Rice’s theorem in particular) mean we know we can’t write a program that will, in general, tell whether an arbitrary program will crash.
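For what it’s worth, here’s the standard diagonalization argument as a quick sketch (my illustration; halts() is a hypothetical oracle that nobody can actually implement):

    # Hypothetical: suppose someone handed us a perfect checker
    # halts(f, arg), assumed to return True iff f(arg) eventually halts.
    # No such function can be written; the definition below shows why.

    def paradox(f):
        # Deliberately do the opposite of whatever halts() predicts for f(f).
        if halts(f, f):          # hypothetical oracle, never actually defined
            while True:          # predicted to halt -> loop forever instead
                pass
        else:
            return               # predicted to loop forever -> halt instead

    # paradox(paradox) would halt exactly when halts() says it doesn't,
    # which is a contradiction, so halts() cannot exist. Rice's theorem
    # extends the same barrier to any general "will this program crash?" checker.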
Similarly, theorems about the asymptotic behavior of certain algorithms matter. A strong version of P != NP would have direct implications for AIs trying to go FOOM. Likewise, if trapdoor functions or one-way functions exist, they give us possible security procedures for handling young general AIs.
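For concreteness, here’s a toy illustration (mine, not anything from the comment above) of the asymmetry a one-way function would buy you, using SHA-256 as the usual conjectured stand-in; whether truly one-way functions exist is still an open question:

    import hashlib
    import secrets

    # Forward direction is cheap; inverting the output is conjectured infeasible.
    secret_token = secrets.token_bytes(32)   # e.g. a credential the AI must not forge
    commitment = hashlib.sha256(secret_token).hexdigest()

    # Publishing the commitment reveals (conjecturally) nothing usable about
    # secret_token, which is the kind of asymmetry a containment protocol
    # could lean on.
    print(commitment)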
I’m mainly talking about Solomonoff induction here, especially when Eliezer uses it as part of his argument about what we can expect from superintelligences, or about searching through 3^^^3 proofs without blinking an eye.
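For readers who haven’t met it: Solomonoff induction weights every program that reproduces the data seen so far by 2^(-program length) and predicts with the resulting mixture, which requires running all programs and is therefore incomputable. A deliberately crippled caricature (my own sketch; three hand-written predictors stand in for the enumeration of all programs):

    # Toy Bayesian mixture in the spirit of Solomonoff induction.
    # The real construction sums over every program, weighted 2^-length,
    # and is incomputable; here three hand-written predictors with made-up
    # "description lengths" stand in for that enumeration.

    candidates = [
        # (name, description length in bits, next-bit predictor)
        ("all zeros",   8,  lambda history: 0),
        ("all ones",    8,  lambda history: 1),
        ("alternating", 12, lambda history: 1 - history[-1] if history else 0),
    ]

    def p_next_bit_is_one(history):
        # A predictor survives only if it reproduces every bit seen so far.
        weights = []
        for name, length, model in candidates:
            ok = all(model(history[:i]) == bit for i, bit in enumerate(history))
            weights.append(2.0 ** -length if ok else 0.0)
        total = sum(weights) or 1.0
        return sum(w for (_, _, model), w in zip(candidates, weights)
                   if model(history) == 1) / total

    print(p_next_bit_is_one([0, 1, 0, 1]))   # 0.0: only "alternating" survives, and it predicts 0 next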
The point in the linked post doesn’t deal substantially with the limits of arbitrarily large computers. It is just an intuition pump for the idea that a fast, moderately bright intelligence could be dangerous.
Is it a good intuition pump? To me it is like using a Turing machine as an intuition pump for how much memory we might have in the future.
We will never have anywhere near infinite memory. We will have a lot more than we have at the moment, but the concept of a Turing machine is not useful for gauging the scope or magnitude of that increase.
I’m trying to find the other post that annoyed me in this fashion. Something to do with simulating universes.