Potential negative consequences [3] of slowing down research on artificial intelligence (a risks and benefits analysis).
(3) Could being overcautious itself be an existential risk that significantly outweighs the risk(s) posed by the subject of caution? Suppose that most civilizations err on the side of caution. This might cause them to evolve much more slowly, so that the chance of a fatal natural disaster occurring before sufficient technology is developed to survive it rises to 100%, or it might stop them from evolving at all, because they can never prove that something is 100% safe before trying it and thus never take the necessary steps to become less vulnerable to naturally existing existential risks. Further reading: Why safety is not safe
I was thinking about how the existential risks affect each other—for example, a real world war might either destroy so much that high tech risks become less likely for a while, or lead to research which results in high tech disaster.
And we may get home build-a-virus kits before AI is developed, even if we aren’t cautious about AI.
I added the footnote quoted above to the post.