Though if human-level AI is very far away, I think there might be better things to do now than work on very direct safety measures.
Agreed. That is the meaning I intended by
"estimates comparing this against the value of other existential risk reduction efforts would be needed to determine this [i.e. whether effort might be better used elsewhere]"