Most of the arguments against AI risk I’ve seen (in popular media, that is) take the form of arguments against AGI, full-stop. Naturally there exist more nuanced arguments (though personally I’ve yet to see any I find convincing), but I was referring to the arguments made by a specific part of the population, i.e. “people who engage in such goalpost-moving”—and in my (admittedly limited) experience, those sorts of people don’t usually put forth very deep arguments.
Here are some arguments against AI x-risk positions from an expert source rather than the popular media:
http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials
http://time.com/3641921/dont-fear-artificial-intelligence/
In any case I think you have unnecessarily limited yourself to considering viewpoints expressed in media that tend to act as echo chambers. It’s not very interesting or relevant what a bunch of talking heads say with respect to a technical question.
The Time article doesn’t say anything interesting.
Goertzel’s article (the first link you posted) is worth reading, although about half of it doesn’t actually argue against AI risk, and the part that does seems obviously flawed to me. Even so, if more LessWrongers take the time to read the article, I would enjoy talking about the details, particularly about his conception of AI architectures that aren’t goal-driven.
I updated my earlier comment to say “against AI x-risk positions”, which I think is a more accurate description of the arguments. There are others as well, e.g. Andrew Ng, but I think Goertzel does the best job at explaining why the AI x-risk arguments themselves are possibly flawed: they are simplistic in how they model AGIs, and therefore draw simple conclusions that don’t hold up in the real world.
And yes, I think more LW’ers and AI x-risk people should read and respond to Goertzel’s superintelligence article. I don’t agree with it 100%, but there are some valid points in there. And one doesn’t become effective by only reading viewpoints one agrees with...