The hypothesis here is that if you are unsure whether AGI is safe, it’s not, and when
you are sure it is, it’s still probably not.
I really didn’t get that impression… Why worry about whether the AI will separate humanity if you think it might fail anyway? Surely it would be better to spend more time making sure it doesn’t fail...