But if you then read a conversation between a “mainstream” AI researcher and an SIAI researcher and the former can’t explain why the latter is wrong, then you had better start updating.
His arguments were not worse than Luke’s arguments if you ignore all the links, which he has no reason to read. He said that he does not believe it is possible to restrict an AI the way SI imagines and still produce a general intelligence. He believes that the most promising route is an AI that can learn by being taught.
In combination with his doubts about uncontrollable superintelligence, that position is not incoherent. Nor can you claim, given this short dialogue, that he did not explain why SI is wrong.
But you are vastly overestimating how much thought scientists and engineers put into broad, philosophical concerns involving their fields.
That’s not what I was referring to. I doubt they have thought much about AI risks. What I meant is that they have likely thought about the possibility of recursive self-improvement and uncontrollable superhuman intelligence.
If an AI researcher tells you that he believes AI risks are not a serious issue because he does not believe that AI can get out of control for technical reasons, and you reply that he has not thought about AI drives and the philosophical reasons why superhuman AI will pose a risk, then you have created a straw man. Which is the usual tactic employed here.