This post seems about 90% correct, and written better than your previous posts.
I expect nanotech will someday be more important than you admit. But I agree that it’s unlikely to be relevant to foom. GPU fabrication is already close enough to nanotech that speeding up GPU production is likely more practical than switching to nanotech.
I suspect Eliezer believes AI could speed up GPU production dramatically without nanotech. Can someone who believes that explain why they think recent GPU progress has been far from optimal?
> Just preceding the 6 OOM claim, EY provides a different naive technical argument as to why he is confident that it is possible to create a mind more powerful than the human brain using much less compute:
I don’t see anything naive about the argument that you quoted here (which doesn’t say how much less compute). Long, fast chains of serial computation enable some algorithms that are hard to implement on brains. So it seems obvious that such systems will have some better-than-human abilities.
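To put rough numbers on that intuition, here is a back-of-the-envelope sketch. The firing rate and clock rate below are my own illustrative assumptions, not figures from the post; the point is only that a strictly serial computation silicon finishes in a second would take a brain-speed machine months.

```python
import math

# Back-of-the-envelope serial-speed gap (illustrative assumptions, not from the post):
NEURON_HZ = 1e2  # assumed typical neuron firing rate, ~100 Hz
CHIP_HZ = 1e9    # assumed serial clock rate of a modern processor, ~1 GHz

gap = CHIP_HZ / NEURON_HZ
print(f"Serial-speed gap: ~10^{math.log10(gap):.0f}x")  # ~10^7x

# Time to run a computation that needs 1e9 strictly serial steps:
steps = 1e9
print(f"At neuron speed:  {steps / NEURON_HZ / 86400:.0f} days")  # ~116 days
print(f"At silicon speed: {steps / CHIP_HZ:.0f} second")          # 1 second
```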
Eliezer doesn’t seem naive there until he jumps to implying a 6 OOM advantage on tasks that matter. He would be correct if there are serial algorithms that substantially improve on the algorithms that matter most for human intelligence. It’s not too hard to imagine that evolution overlooked such serial algorithms, since slow, massively parallel neurons gave it little chance to exploit them.
Recent trends in computing are decent evidence that the key human pattern-recognition algorithms can’t be made much more efficient. That seems to justify maybe 80% confidence that Eliezer is wrong here. My best guess is that Eliezer focuses too much on algorithms where humans are weak.