I believe that AI safety is a real issue. There are both near term and long term issues.
I believe that the version of AI safety that will get traction is regulatory capture.
I believe that the AI safety community is too focused on what fascinating technology can do, and not enough on the human part of the equation.
On Andrew Ng, his point is that he doesn’t see how exactly AI is realistically going to kill all of us. Without a concrete argument that is worth responding to, what can he really say? I disagree with him on this; I do think there are realistic scenarios to worry about. But I do agree with him on what is happening politically with AI safety.
Thanks, yes, sadly seems all very plausible to me too.