I wonder if / how that win will affect estimates on the advent of AGI within the AI community.
I’ve already seen some goalpost-moving on Hacker News. I do hope this convinces some people, though.
People who engage in such goalpost-moving have already written down their bottom line, most likely because AI risk pattern-matches to the literary genre of science fiction. I wouldn’t expect such people to be swayed by any sort of empirical evidence short of the development of strong AGI itself. Any arguments they offer against strong AGI amount to little more than rationalization. (Of course, that says nothing about the strengths of the arguments themselves, which must be evaluated on their own merits.)
It is entirely possible to firmly believe in the inevitability of near-term AGI without subscribing to AI risk fears. I wouldn’t conflate the two.
Most of the arguments against AI risk I’ve seen (in popular media, that is) take the form of arguments against AGI, full stop. Naturally there exist more nuanced arguments (though personally I’ve yet to see any I find convincing), but I was referring to the arguments made by a specific part of the population, i.e. “people who engage in such goalpost-moving”, and in my (admittedly limited) experience, those sorts of people don’t usually put forth very deep arguments.
Here are some arguments against AI x-risk positions from an expert source rather than the popular media:
http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials
http://time.com/3641921/dont-fear-artificial-intelligence/
In any case, I think you have unnecessarily limited yourself to considering viewpoints expressed in media that tend to act as echo chambers. What a bunch of talking heads say about a technical question isn’t very interesting or relevant.
The Time article doesn’t say anything interesting.
Goertzel’s article (the first link you posted) is worth reading, although about half of it doesn’t actually argue against AI risk, and the part that does seems obviously flawed to me. Even so, if more LessWrongers take the time to read the article I would enjoy talking about the details, particularly about his conception of AI architectures that aren’t goal-driven.
I updated my earlier comment to say “against AI x-risk positions”, which I think is a more accurate description of the arguments. There are others as well, e.g. Andrew Ng, but I think Goertzel does the best job of explaining why the AI x-risk arguments themselves are possibly flawed: they are simplistic in how they model AGIs, and therefore draw simple conclusions that don’t hold up in the real world.
And yes, I think more LW’ers and AI x-risk people should read and respond to Goertzel’s superintelligence article. I don’t agree with it 100%, but there are some valid points in there. And one doesn’t become effective by reading only viewpoints one agrees with...