Given your probabilities, I think you are underappreciating the magnitude of the downside risks.
I spent some time trying to dig into why we need to worry so much about the downside risks of false positives (thinking we're going to get aligned AI when we're not) and how deep the problem goes, but most of the relevant argumentation I would make to convince you to worry is already right at the top of the post.