I agree with the general sentiment. Though if human-level AI is very far away, I think there might be better things to do now than work on very direct safety measures. For instance, improve society’s general mechanisms for dealing with existential risks, or get more information about what’s going to happen and how to best prepare. I’m not sure if you meant to include these kinds of things.
"Though if human-level AI is very far away, I think there might be better things to do now than work on very direct safety measures."
Agreed. That is the meaning I intended by:

"estimates comparing this against the value of other existential risk reduction efforts would be needed to determine this [i.e. whether effort might be better used elsewhere]"