Imagine a hypothetical world in which there was an algorithm that could do everything that the human brain does better, and with a millionth of the compute.
Based on the evidence at hand (as summarized in this article), we probably don’t live in that world. The burden of proof is on you to show otherwise.
But in those hypothetical worlds, AGI would come earlier, probably well before the end phase of Moore’s Law.
I was using that as a hypothetical example to show that your definitions were bad. (In particular, the attempt to define arithmetic as not AI because computers were so much better at it.)
I also don’t think that you have significant evidence that we don’t live in this world, beyond the observation that if such an algorithm exists, it is sufficiently non-obvious that neither evolution nor humans have found it so far.
A lot of the article argues that the brain is thermodynamically efficient at turning energy into compute. The rest compares the brain to existing deep learning techniques.
I admit that I have little evidence that such an algorithm does exist, so it’s largely down to priors.
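To make the “largely down to priors” point concrete, here is a minimal sketch of the Bayes update involved. All the numbers (the prior, and the chance such an algorithm would have gone unfound) are purely illustrative assumptions of mine, not anything from the article:

```python
# Minimal sketch: how much should "nobody has found such an algorithm yet"
# shift a prior on "a vastly more efficient algorithm exists"?

def posterior(prior, p_not_found_if_exists, p_not_found_if_not):
    """Posterior P(exists | not found yet) via Bayes' rule."""
    joint_exists = prior * p_not_found_if_exists
    joint_not = (1 - prior) * p_not_found_if_not
    return joint_exists / (joint_exists + joint_not)

# Illustrative values only: a 50% prior, and the assumption that such an
# algorithm, if it exists, is non-obvious enough that there's an 80% chance
# neither evolution nor humans would have found it by now.
print(posterior(prior=0.5, p_not_found_if_exists=0.8, p_not_found_if_not=1.0))
# -> ~0.44: the observation only moves the needle slightly, i.e. it is weak evidence.
```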
I also don’t think that you have significant evidence that we don’t live in this world, beyond the observation that if such an algorithm exists, it is sufficiently non-obvious that neither evolution nor humans have found it so far.
FWIW, I totally think that mental savants like Ramanujan (or “ordinary” geniuses like von Neumann) make a super-strong case for the existence of “algorithms evolution knows not”.
(Yes, they were humans, and were therefore running on the same evolutionary hardware as everybody else. But I don’t think it makes sense to credit their remarkable achievements to the hardware evolution produced; indeed, it seems almost certain that they were using that same hardware to run a better algorithm, producing much better results with the same amount of compute—or possibly less, in Ramanujan’s case!)