The Time article doesn’t say anything interesting.
Goertzel’s article (the first link you posted) is worth reading, although about half of it doesn’t actually argue against AI risk, and the part that does seems obviously flawed to me. Even so, if more LessWrongers take the time to read the article, I’d enjoy discussing the details, particularly his conception of AI architectures that aren’t goal-driven.
I updated my earlier comment to say “against AI x-risk positions”, which I think is a more accurate description of the arguments. There are others as well, e.g. Andrew Ng, but I think Goertzel does the best job of explaining why the AI x-risk arguments themselves are possibly flawed: they model AGIs simplistically, and therefore draw simple conclusions that don’t hold up in the real world.
And yes, I think more LWers and AI x-risk people should read and respond to Goertzel’s superintelligence article. I don’t agree with it 100%, but there are some valid points in there. And one doesn’t become effective by reading only viewpoints one already agrees with...