Roko, Ben thought he could do it in a few years, and still thinks so now. I was not working with Ben on AI, then or now, and I didn't think I could do it in a few years, then or now. I made mistakes in my wild and reckless youth, but that was not one of them.
[Correction: Moshe Looks points out that in 1996, in "Staring into the Singularity", I claimed that it ought to be possible to get to the Singularity by 2005, which I thought I would have a reasonable chance of doing given a hundred million dollars per year. This claim was for brute-forcing AI via a Manhattan Project, before I had any concept of Friendly AI. And I do think that Ben Goertzel generally sounds a bit more optimistic and reassuring about his AI project getting to general intelligence within on the order of five years, given decent funding. Nonetheless, the statement above is wrong. Apparently this statement was so out of character for my modern self that I simply have no memory of ever making it, an interesting but not surprising observation; there's a reason I talk about Eliezer_1996 like he was a different person. It should also be mentioned that I do assess a thought-worthy chance of AI showing up in five years, though probably not Friendly. But this doesn't reflect the problem being easy; it reflects me trying to widen my confidence intervals.]
Zubon, the thought has tormented me for quite a while that if scientific progress continued at exactly the current rate, it probably wouldn't be more than 100 years before Friendly AI was a six-month project for one grad student. But you see, those six months are not the hard part of the work. That's never the really hard part of the work. Scientific progress is the really fricking hard part of the work. But this is rarely appreciated, because most people don't work on scientific progress; they only apply existing techniques, which become their only referent for "hard" or "easy", and scientific progress isn't a thought that occurs to them, really. The same goes for the majority of AGI wannabes: they think in terms of hard or easy techniques to apply, just as they think in terms of cheap or expensive hardware; the notion of hard or easy scientific problems-of-understanding to solve does not appear anywhere on their gameboard. To them, scientific problems are either already solved or clearly much too difficult for anyone to solve; so we'll have to deal with the problem using a technique we already understand, or an understandable technology that seems to be progressing, like whole brain emulation or parallel programming.
These are not the important things, and they are not the gap that separates you from the imaginary grad student of 100 years hence. That gap is made out of mysteries, and you cross it by dissolving them.
Peter, human brains are somewhat unstable even when operating within ancestral parameters. Yes, you run into a different class of problems with uploading. And unlike FAI, uploading has a nonzero chance of full success even if you don't use exact math for everything. But there are still problems.