Updated the post with excerpts from the MIT Technology Review video interview, where Hinton, among other things, brings up convergent instrumental goals (“And if you give something the ability to create its own sub-goals in order to achieve other goals, I think it’ll very quickly realise that getting more control is a very good sub-goal because it helps you achieve other goals. And if these things get carried away with getting more control, we’re in trouble”) and explicitly says x-risk from AI may be close (“So I think if you take the existential risk seriously, as I now do, I used to think it was way off, but I now think it’s serious and fairly close. It might be quite sensible to just stop developing these things any further. But I think it’s completely naive to think that would happen.”)