I like Goertzel’s succinct explanation of the idea behind Moore’s Law of Mad Science:
...as technology advances, it is possible for people to create more and more destruction using less and less money, education and intelligence.
Also, his succinct explanation of why Friendly AI is so hard:
The practical realization of [Friendly AI] seems likely to require astounding breakthroughs in mathematics and science — whereas it seems plausible that human-level AI, molecular assemblers and the synthesis of novel organisms can be achieved via a series of moderate-level breakthroughs alternating with ‘normal science and engineering.’
Another choice quote that succinctly makes a key point I find myself making all the time:
if the US stopped developing AI, synthetic biology and nanotech next year, China and Russia would most likely interpret this as a fantastic economic and political opportunity, rather than as an example to be imitated.
His proposal for Nanny AI, however, appears to be FAI-complete.
Also, it is strange that despite paragraphs like this:
we haven’t needed an AI Nanny so far, because we haven’t had sufficiently powerful and destructive technologies. And now, these same technologies that may necessitate the creation of an AI Nanny, also may provide the means of creating it.
...he does not cite Bostrom (2004) anywhere.
It’s a very different idea from Yudkowsky’s “CEV” proposal.
It’s reasonable to think that a nanny-like machine might be easier to build than other kinds — because a nanny’s job description is rather limited.