...which almost suggests to me that maybe the optimal political policy to advocate is for things that reduce the likelihood and scope of prosaic disasters …
Am I reading my results wrong?
No, I think you are reading them right. If your projection places the AI singularity more than a few decades out (given business-as-usual), then some other serious civilization-collapsing disaster is likely to arise before the FAI arrives to save us.
But, the scenario that most frightens me is that the near-prospect of an FAI might itself be the trigger for a collapse—due to something like the ‘Artilect War’ of de Garis.
De Garis has luddites on one side of his war. That group has historically been impoverished and has lacked power. The government may well simply declare them undesirable terrorists, and stomp on them, if they start causing trouble.
Consider the environmental movement today: they are usually fairly peace-loving. It doesn't seem terribly likely to me that their descendants will go into battle.