The point is that you never achieve 100% safety no matter what, so the correct approach is to reduce risk as much as possible given whatever resources you have. This is exactly what Eliezer says SI is doing:
"I have an analysis of the problem which says that if I want something to have a failure probability less than 1, I have to do certain things, because I haven't yet thought of any way not to have to do them."
IOW, they thought about it and concluded there’s no other way.
Is their approach the best possible one? I don’t know, probably not. But it’s a lot better than “let’s just build something and hope for the best”.
Edit: Is that analysis public? I’d be interested in that, probably many people would.
I’m not suggesting “let’s just build something and hope for the best.” Rather, we should pursue several strategies at once: FAI theory, stopgap security measures, and education of other researchers.