Speaking of things to be worried about other than AI, I wonder whether a biotech disaster is a more urgent problem, even if a less comprehensive one.
Part of what I’m assuming is that developing a self-amplifying AI is so hard that biotech could be well-developed first.
While it doesn’t seem likely to me that a biotech disaster could wipe out the human race, it could cause huge damage: I’m imagining diseases aimed at monoculture crops, or plagues resulting from terrorism or incompetent experiments.
My other assumptions are that FAI research depends on a wealthy, secure society with a good bit of surplus wealth to spare for individual projects, and that it is likely to remain highly dependent on a small number of specific people for the foreseeable future.
On the other hand, FAI is at least a relatively well-defined project. I’m not sure where you’d start to prevent biotech disasters.
That’s one hell of a “relatively” you’ve got there!