“I suspect that, if correctly designed, a midsize computer cluster would be able to get high-grade thinking done at a serial speed much faster than human, even if the total parallel computing power was less.”
I am glad I can agree for once :)
“The main thing I’ll venture into actually expecting from adding “insight” to the mix, is that there’ll be a discontinuity at the point where the AI understands how to do AI theory, the same way that human researchers try to do AI theory. An AI, to swallow its own optimization chain, must not just be able to rewrite its own source code;”
Anyway, my problem with your speculation about hard takeoff is that you seem to make the same conceptual mistake that you so dislike about Cyc: you seem to think that the AI will be mostly “written in the code”.
I suspect it is very likely that the true working AI code will be relatively small and already pretty well optimized. The “mind” itself will be created from it by some self-learning process (my favorite scenario involves a weak AI as the initial “tutor”) and will in fact consist mostly of a vast number of classification coefficients and connections, or something like that (think Bayesian or neural networks).
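To make that picture concrete, here is a minimal illustrative sketch (the toy feed-forward architecture, the layer sizes, and the crude last-layer update rule are all hypothetical choices of mine, not a claim about how a real AI would be built): the hand-written “primal algorithm” fits in a couple of dozen lines, while the learned “mind” is roughly fifteen million numeric coefficients produced by training.

```python
# Illustrative sketch only: a tiny generic network whose *code* is short,
# but whose learned state is millions of coefficients. All sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# "Primal algorithm": a small, generic feed-forward network with one update rule.
layer_sizes = [4096, 2048, 2048, 1024]              # hypothetical layer sizes
weights = [rng.normal(0, 0.01, (m, n))              # the learned "mind": ~15M numbers
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Run one input vector through the network."""
    for w in weights:
        x = np.tanh(x @ w)
    return x

def learn(x, target, lr=1e-3):
    """Crude learning step: nudge only the last layer toward the target (illustrative)."""
    hidden = x
    for w in weights[:-1]:
        hidden = np.tanh(hidden @ w)
    error = target - np.tanh(hidden @ weights[-1])
    weights[-1] += lr * np.outer(hidden, error)

# Usage: one forward pass and one learning step on random data.
learn(rng.normal(size=layer_sizes[0]), rng.normal(size=layer_sizes[-1]))
_ = forward(rng.normal(size=layer_sizes[0]))

n_params = sum(w.size for w in weights)
print(f"learned coefficients: {n_params:,}")        # ~14,680,000 numbers vs. ~30 lines of code
```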
While it will probably be within the AI’s power to optimize its “primal algorithm”, the gains there will be limited (it will be pretty well optimized by humans anyway). Its ability to reorganize its “thinking network” might be severely limited. The same goes for humans: we have a near-complete understanding of how a single neuron works, but we are far from understanding the whole network. Also, with each further self-improvement the complexity grows, and it is quite reasonable to predict that this complexity will grow faster than the AI’s ability to understand it.
I think it all boils down to a very simple showstopper: assuming you are building a perfect simulation, how many atoms do you need to simulate an atom? (BTW, this is also a showstopper for the “nested virtual reality” idea.)
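To spell out the counting argument behind that question, here is a sketch under my own assumption (not an established fact, and disputed below) that perfectly simulating n atoms costs strictly more than n atoms of hardware, because the simulator also needs atoms for its own machinery:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Sketch only: the assumption $c > 1$ (perfectly simulating $n$ atoms costs
% strictly more than $n$ atoms of hardware) is doing all the work here.
Let $N$ be the number of atoms in the host machine and suppose a perfect
simulation of $n$ atoms requires at least $c\,n$ atoms, with $c > 1$.
Then the number of atoms representable at nesting depth $k$ satisfies
\[
  n_k \;\le\; \frac{N}{c^{\,k}},
\]
so nested worlds shrink geometrically and the chain ends after roughly
$\log_c N$ levels; in particular, no machine can host a perfect,
full-detail copy of itself.
\end{document}
```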
Note, however, that this whole argument is not really mutually exclusive with hard takeoff. The AI can still build a next-generation AI that is better; it is the “self” part that might not work. (BTW, the interesting part is that the “parent” AI might then face the same dilemma with its descendant’s Friendliness ;)
I also think that in all your “foom” posts you underestimate the empirical form of knowledge. It sounds like you expect the AI to just sit down in the cellar and think, without much input or action, then invent the theory of everything and take over the world.
That is not going to happen, at least for the same reason that the endless chain of nested VRs is unlikely.
“I think it all boils down to a very simple showstopper: assuming you are building a perfect simulation, how many atoms do you need to simulate an atom?”
Perfect simulation is not the only means of self-knowledge.
As for empirical knowledge, I’m not sure Eliezer expects an AI to take over the world with no observations/input at all, but he does think that people far overestimate the amount of observation an effective AI would need.
(Also, for an AI, “building a new AI” and “self-improving” are pretty much the same thing. There isn’t anything magic about “self”. If the AI can write a better AI, it can write a better AI; whether it calls that code “self” or not makes no difference. Granted, it may be somewhat harder for the AI to make sure the new code has the same goal structure if it’s written from scratch, but there’s no particular reason it has to start from scratch.)
“I think it all boils down to a very simple showstopper: assuming you are building a perfect simulation, how many atoms do you need to simulate an atom?”
The answer to that question is a blatant “At most, one.” The universe is already shaped like itself.
Yes, but the bound of “at least one” is also very likely to hold for many purposes, if our understanding of the laws of physics is at all close to correct.