You write “Eliezer made a very interesting claim—that current hardware is sufficient for AI. Details?”
I don’t know what argument Eliezer would’ve been using to reach that conclusion, but it’s the kind of conclusion people typically reach if they do a Fermi estimate. E.g., take some bit of nervous tissue whose function seems to be pretty well understood, like the early visual preprocessing (edge detection, motion detection...) in the retina. Now estimate how much it would cost to build conventional silicon computer hardware performing the same operations; then scale the estimated cost of the brain in proportion to the ratio of volume of nervous tissue.
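That kind of Fermi estimate can be sketched in a few lines. Every number below is an illustrative placeholder I've made up for the sketch (assumed retina compute, assumed tissue volumes, assumed price/performance), not a measured value from Moravec or anyone else:

```python
# Moravec-style Fermi estimate: scale from retina to whole brain by tissue volume.
# ALL numbers are illustrative assumptions, not measured values.

retina_ops_per_sec = 1e9      # assumed silicon ops/s to replicate retinal preprocessing
retina_volume_cm3 = 0.03      # assumed volume of retinal neural tissue
brain_volume_cm3 = 1.4e3      # rough human brain volume

# Scale the compute requirement in proportion to tissue volume.
brain_ops_per_sec = retina_ops_per_sec * (brain_volume_cm3 / retina_volume_cm3)

# Convert to a hardware cost at an assumed price/performance point.
dollars_per_ops_per_sec = 1e-10   # assumed: $100 buys 1e12 ops/s

hardware_cost = brain_ops_per_sec * dollars_per_ops_per_sec

print(f"estimated brain compute: {brain_ops_per_sec:.2e} ops/s")
print(f"estimated hardware cost: ${hardware_cost:,.0f}")
```

The point of the exercise is not the particular output but that the answer moves only linearly with each assumption, so even order-of-magnitude errors in the inputs leave the conclusion in roughly the same ballpark.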
See http://boingboing.net/2009/02/10/hans-moravecs-slide.html for the conclusion of one popular version of this kind of analysis. I’m pretty sure that the analysis behind that slide is in at least one of Moravec’s books (where the slide, or something similar to it, appears as an illustration), but I don’t know offhand which book.
The analysis could be grossly wrong if the foundations are wrong, perhaps because key neurons are doing much more than we think. E.g., if some kind of neuron is storing a huge number of memory bits per neuron (which I doubt: admittedly there is no fundamental reason I know of that this couldn’t be true, but there’s also no evidence for it that I know of) or if neurons are doing quantum calculation (which seems exceedingly unlikely to me; and it is also unclear that quantum calculation can even help much with general intelligence, as opposed to helping with a few special classes of problems related to number theory). I don’t know of any particularly likely way for the foundations to be grossly wrong, though, so the conclusions seem pretty reasonable to me.
Note also that suitably specialized computer hardware tends to have something like an order of magnitude better price/performance than the general-purpose computer systems which appear on the graph. (E.g., it is much more cost-effective to render computer graphics using a specialized graphics board, rather than using software running on a general-purpose computer board.)
I find this line of argument pretty convincing, so I think it’s a pretty good bet that, given the software, current technology could build human-comparable AI hardware in quantities of 100 for less than a million dollars per AI; and that if the figure isn’t yet as low as one hundred thousand dollars per AI, it will be that low very soon.
Thanks. I’m not sure how much complexity is added by the dendrites making new connections.