Andrew Ng wants to have a conversation about extinction risk from AI

Link post

Andrew Ng writes:

I’d like to have a real conversation about whether AI is a risk for human extinction. Honestly, I don’t get how AI poses this risk. What are your thoughts? And, who do you think has a thoughtful perspective on how AI poses this risk that I should talk to?

In the attached video, he states that he respects many of the people who signed the letter a lot, and will reach out to people who he thinks have a thoughtful perspective. But he is also interested in further suggestions for whom to talk to.

Given that Andrew Ng is one of the top AI scientists in the world, it seems valuable for someone to find a way to connect with him.