Negligible for the purpose of calculating the AI project's effect on existential risk, because the other effects, positive and negative, would be so much larger.
Any other possible effects don't negate the fact that you're killing six million people by going ahead with a potentially UnFriendly AI.
If going ahead reduces the expected deaths from existential disaster by more than those six million, then in expectation you're net saving lives rather than net killing. If every option involves existential risk, including doing nothing, then all one can do is pick the option with the lowest expected risk.
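To make the expected-value arithmetic concrete, here is a minimal sketch. Every number in it (the population, the baseline existential risk, the chance the AI turns out UnFriendly, the size of the risk reduction) is hypothetical, chosen only so the figures work out:

```python
# Toy expected-value comparison; every number is hypothetical,
# chosen only to make the arithmetic in the argument concrete.

POPULATION = 6_000_000_000  # illustrative world population

def expected_deaths(p_disaster: float) -> float:
    """Expected deaths from a given probability of existential disaster."""
    return p_disaster * POPULATION

# Doing nothing: assume a 2% baseline chance of existential disaster.
do_nothing = expected_deaths(0.02)

# Going ahead: assume a 0.1% chance the AI is UnFriendly (that is the
# "six million expected deaths"), but the project also cuts the
# baseline existential risk from 2% down to 1%.
go_ahead = expected_deaths(0.001) + expected_deaths(0.01)

print(f"do nothing: {do_nothing:,.0f} expected deaths")  # 120,000,000
print(f"go ahead:   {go_ahead:,.0f} expected deaths")    #  66,000,000

# Under these made-up numbers, going ahead "kills" 6 million people in
# expectation, yet still saves 54 million in expectation relative to
# doing nothing, because it is the option with the lowest expected risk.
```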