Well, he’s right that intentionally evil AI is highly unlikely to be created:
> Malevolent AI would need all these capabilities, and then some. Both an intent to do something and an understanding of human goals, motivations, and behaviors would be keys to being evil towards humans.
which happens to be exactly why Friendly AI is difficult. He doesn’t directly address things that don’t care about humans, like paperclip maximizers, but some of his arguments can be applied to them.
He also writes:

> Expecting more computation to just magically get to intentional intelligences, who understand the world is similarly unlikely.
He’s totally right that AGI with intentionality is an extremely difficult problem. We haven’t created anything that comes close to practically approximating Solomonoff induction across a variety of situations, and Solomonoff induction by itself is insufficient for the kind of intentionality you would need to build something that cares about universe states while modeling the universe in a flexible manner. But you can throw more computation at a lot of problems to get better solutions, and I expect approximate Solomonoff induction to become practical in limited ways as computation power increases and moderate algorithmic improvements are made. This is true partly because greater computation power lets one search for better algorithms.
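For concreteness: Solomonoff induction predicts a sequence $x$ by mixing over all programs $p$ for a universal prefix machine $U$ whose output begins with $x$, weighting each program by $2^{-|p|}$:

$$M(x) = \sum_{p\,:\,U(p) = x*} 2^{-|p|}$$

The sum ranges over infinitely many programs, some of which never halt, which is why any usable version has to bound program length and runtime. The sketch below is a toy resource-bounded approximation in that spirit; the miniature program encoding, the automaton interpreter, and the length cap are all assumptions invented for illustration, not anyone’s actual proposal.

```python
from itertools import product

def run(program, n_bits):
    """Interpret `program` (a tuple of bits) as the rule table of a
    two-state automaton and emit `n_bits` output bits. The first four
    bits define rule[state][last_output] -> next output; any trailing
    bits are ignored, loosely mimicking padded prefix programs."""
    rule = [[program[0], program[1]], [program[2], program[3]]]
    state, out, bits = 0, 0, []
    for _ in range(n_bits):
        out = rule[state][out]
        bits.append(out)
        state ^= out  # flip internal state whenever a 1 is emitted
    return bits

def predict_next(observed, max_len=8):
    """Posterior probability that the next bit is 1: enumerate every
    program of length 4..max_len, keep those whose output starts with
    `observed`, and weight each survivor by 2^-length."""
    weight_1 = weight_total = 0.0
    for length in range(4, max_len + 1):
        for program in product([0, 1], repeat=length):
            bits = run(program, len(observed) + 1)
            if bits[:len(observed)] != observed:
                continue  # program fails to reproduce the data
            w = 2.0 ** -length
            weight_total += w
            if bits[-1] == 1:
                weight_1 += w
    return weight_1 / weight_total if weight_total else None

print(predict_next([1, 0, 1, 0, 1]))  # mixture's guess for the next bit
```

Even in this tiny language the enumeration grows as $2^{\text{length}}$, which is the sense in which more raw computation (plus smarter search) buys a strictly less crude approximation.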
I do agree with him that human-level AGI within the next few decades is unlikely and that significantly slowing down AI research is probably not a good idea right now.