The problem with this consensus position is that it fails to imagine that several deadly pandemics could run simultaneously, and that existential terrorists could deliberately engineer this by manipulating several viruses. Fairly simple AI could help engineer deadly plagues in droves; it would not need to be superintelligent to do so.
Personally, I see it as a big failure of the x-risk community that such risks are ignored and not even discussed.
Is there anything we can realistically do about it? Without crippling the whole of biotech?
Perhaps have any bioprinter, or other such tool, be constantly connected to a narrow AI that makes sure it doesn't print ANY viruses, bacteria, or prions, whether accidentally or intentionally.
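Concretely, the screening step could be as simple as matching each requested sequence against a curated blocklist of pathogen-derived fragments before the device accepts the job. Here is a minimal Python sketch of that idea; the blocklist entry, the k-mer length, and the function names are illustrative placeholders, not any existing screening standard or product API:

```python
# Toy sketch of a screening gate: before the device runs a synthesis job, the
# requested DNA sequence is checked against a local blocklist of pathogen-derived
# k-mers. All data and parameters below are placeholders for illustration only.

def build_kmer_index(flagged_sequences, k=20):
    """Collect every length-k substring of the flagged reference sequences."""
    index = set()
    for seq in flagged_sequences:
        seq = seq.upper()
        for i in range(len(seq) - k + 1):
            index.add(seq[i:i + k])
    return index

def is_flagged(order_sequence, flagged_kmers, k=20):
    """Return True if the requested sequence shares any k-mer with the blocklist."""
    seq = order_sequence.upper()
    return any(seq[i:i + k] in flagged_kmers for i in range(len(seq) - k + 1))

if __name__ == "__main__":
    # Placeholder "pathogen" fragment; a real system would use curated databases.
    blocklist = build_kmer_index(["ATGACCGTTAAACCGGGTTTACGTAGCATCGATCGGA"])
    order = "TTTTATGACCGTTAAACCGGGTTTACGTAGCATCGATCGGAAAA"
    print("reject job" if is_flagged(order, blocklist) else "run job")
```

In practice the hard part is not the matching code but keeping the reference database current and tamper-proof, which is where the "constantly connected narrow AI" framing comes in.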
Jump ASAP to friendly AI or to some other global control system, perhaps using many interconnected narrow AIs as an AI police. Basically, if we don't create a global control system, we are doomed. But it could be decentralised to avoid the worst aspects of totalitarianism.
Regarding FAI research, it is a catch-22. If we effectively slow down AI research, biorisks will start to dominate. If we accelerate AI, we are more likely to create it before AI safety theory is ready to be implemented.
I could send my article about these biorisks to anyone interested; I don't want to publish it openly on the internet, as I'm hoping for a journal publication.