Goertzel’s article seems basically reasonable to me. There were some misstatements that I can excuse at the very end, because by that point part of his argument was that certain kinds of hyperbole came up over and over, and his text was mimicking the form of the hyperbolic arguments even as it criticized them. The grandmother line and the IQ-obsessed aliens spring to mind :-P
Given his summary of the “Scary AGI Thesis”...
If someone builds an advanced AGI without a provably Friendly architecture, probably it will have a hard takeoff, and then probably this will lead to a superhuman AGI system with an architecture drawn from the vast majority of mind-architectures that are not sufficiently harmonious with the complex, fragile human value system to make humans happy and keep humans around.
...it seemed like it would make sense to track down past discussions here where our discussions may have been implicitly shaped by the thesis. Here are two articles where the issue of concrete programming projects came up, spawning interesting discussions that seemed to have the Scary Thesis as a subtext:
In June 2009, cousin_it wrote Let’s reimplement EURISKO!, and some of the discussion got into meta-strategy for the direction of AGI work. The highest-voted top-level comment is Eliezer bringing up issues of caution.
In January 2010, StuartArmstrong wrote Advice for AI makers, and again Eliezer brings up caution to massive approval. This one is particularly interesting because Wei_Dai has a +20 child comment under it talking about Goertzel’s company Webmind… and the anthropic argument.
At the same time, in the course of searching, the “other side” also came up, which I think speaks well for the community :-)
Three days after the Eurisko article was posted, rwallace wrote Why safety is not safe, which discussed the issue in the context of (1) historical patterns of competition versus historical patterns of politically managed non-innovation, and (2) the fact that the “human trajectory” simply doesn’t appear to be long-term stable, such that swift innovation may be the only thing that prevents a sort of “default outcome” of human extinction.
Of course, even earlier, Eliezer was talking about the general subject of novel research as something that can prevent or cause tragedy, as with the July 2008 article Should We Ban Physics? (although he did his usual thing with an offhand claim that it was basically impossible to actually prevent innovation).