Ben,
You say “raw processing power isn’t the crucial element.” I said that speed “is irrelevant to displaying intelligent thought.” We’re actually saying pretty much the same thing! All I was really trying to argue was that phrases like “the speed of transistors” need to be replaced with phrases like “the accuracy, retention, and flexibility of transistors.” I was -not- arguing against the principle that turning the product of a process back on improving that process yields an exponential growth curve in both intelligence and productivity.
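To spell out the step from recursion to exponential growth, here is a toy model of my own (not something either of us has formalized): if capability C(t) is itself what gets applied to improving capability, then the improvement rate is proportional to the current level, and proportional growth is exponential growth.

```latex
% Toy model: the product of the process (capability C) is turned back
% on improving the process itself, so the growth rate scales with C.
\frac{dC}{dt} = k\,C
\qquad\Longrightarrow\qquad
C(t) = C_0\, e^{kt}
```

The constant k absorbs everything about how effectively output is reinvested; the shape of the curve doesn’t depend on raw speed, only on the feedback loop being closed.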
We get plenty of calculation power out of the meat in our brains, but it is unfocused, inaccurate, biased, and forgetful. Performing lots of “flops” is not our weakness. The reason that recursive self-improvement is possible in transistor-based entities has nothing to do with speed; that is the only point I’m trying to make.
We should be wary not because the machine can think thoughts faster than we can, but because it can think thoughts -better- than we can.
Eliezer, you’re assuming a very specific type of AI here. There are at least three different types, each with its own challenges:
1. An AI created by clever programmers who grasp the fundamentals of intelligence.
2. An AI evolved in iterative simulations.
3. An AI based on modeling human intelligence, simulating our neural interactions with the help of future neuroscience.
Type 1 is dangerous because it will interpret its instructions literally and has, as you say, “no ghost.” Type 2 is possibly the most dangerous because we will have no idea how it actually works; there are already experiments that evolve circuits that perform specific tasks but whose inner workings are not understood. With Type 3 we actually can anthropomorphize the AI, but it’s dangerous because the AI is basically a person and has all the problems of a person.
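As an aside, here is a minimal sketch of why Type 2 designs come out opaque. This is my own toy illustration, not the actual circuit-evolution experiments; the bitstring genome, fitness function, and target behavior are all made up. The point it shows is that the selection loop scores each candidate only by external behavior, so nothing in the process ever produces an account of how the winning design works internally.

```python
import random

# Minimal sketch of "blind" evolutionary design. Candidates are judged
# only by observed behavior, so the loop never builds (or needs) any
# explanation of a candidate's internal workings. The bitstring
# "circuit" and target behavior are hypothetical stand-ins.

TARGET = [1, 0] * 16  # the external behavior we are selecting for

def fitness(genome):
    # Score purely by how closely output matches the target behavior.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with small probability; no design insight involved.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Keep the fittest half, refill with mutated copies of survivors.
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

print("generations:", generation, "best fitness:", fitness(population[0]))
```

The real hardware-evolution experiments reportedly produced circuits that exploited physical quirks no engineer would have designed in deliberately, which is exactly the opacity worry with Type 2.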
Given current trends, it seems to me that slow progress is being made toward Types 2 and 3, while Type 1 has stymied us for many years.