I don’t think a lack of hardware progress is a major problem for avoiding existential disaster.
I read your post, but I don’t see why a lack of understanding of certain past events should lead us to devalue our current best estimates of how to reduce danger. I wouldn’t be remotely surprised if there are dangers we don’t (yet?) understand, but why presume an unknown danger isn’t localized in the same areas as the known dangers? Keep in mind that reversed stupidity is not intelligence.
Because it has empirically turned out not to be. Reversed stupidity is not intelligence, but it is avoidance of stupidity. When we know a particular source gives wrong answers, that doesn’t tell us the right answers, but it does tell us what to avoid.