One may try the following conjecture: synthetic biology is so simple, and AI so complex, that the risk of extinction from artificial viruses arrives much earlier in time. Even if both risks have the same probability individually, the one that comes first captures the biggest share of the total probability.
For example, let Pv = 0.9 be the risk from viruses in the absence of any other risks, and Pai = 0.9 the risk from AI in the absence of any viruses. But Pv may be realized in the first half of the 21st century, and Pai in the second. In this case the total probability of extinction is 0.99, of which 0.9 comes from viruses and only 0.09 from AI.
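Spelled out, assuming the two risks are independent and the virus risk resolves first:

$$P_{total} = 1 - (1 - P_v)(1 - P_{ai}) = 1 - 0.1 \times 0.1 = 0.99$$

$$P(\text{extinction by virus}) = P_v = 0.9, \qquad P(\text{extinction by AI}) = (1 - P_v)\,P_{ai} = 0.1 \times 0.9 = 0.09$$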
If this is true, then promoting AI as the main existential risk is a misallocation of resources.
If we look closer at Pv and Pai, we may find that Pv is increasing exponentially in time because of the Moore's-law-like progress in biotech, while Pai describes a one-time event and is constant: AI will either be friendly or not. (It may also have a more complex time dependence, but here I just estimate the probability that FAI theory will be created and implemented.)
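As a toy illustration (the starting risk and doubling time below are made-up placeholders, not estimates), the virus hazard can be modeled as an exponentially increasing annual probability, while the AI outcome is a single fixed-probability event:

```python
# Toy model, not a forecast: the annual probability of an engineered-virus
# catastrophe doubles every few years (a stand-in for "Moore's law in biotech"),
# while the AI outcome is treated as a single fixed-probability event.

P_AI_UNFRIENDLY = 0.9    # assumed one-time probability that AI turns out unfriendly
P_VIRUS_START = 0.001    # assumed annual virus risk in year 0 (made up)
DOUBLING_YEARS = 5       # assumed doubling time of biotech capability (made up)

def p_virus_in_year(t: int) -> float:
    """Annual virus-extinction probability in year t, capped at 1."""
    return min(1.0, P_VIRUS_START * 2 ** (t / DOUBLING_YEARS))

def cumulative_virus_risk(years: int) -> float:
    """Probability that a virus catastrophe happens at least once within `years`."""
    survival = 1.0
    for t in range(years):
        survival *= 1.0 - p_virus_in_year(t)
    return 1.0 - survival

for horizon in (10, 25, 50):
    print(f"Cumulative virus risk over {horizon} years: {cumulative_virus_risk(horizon):.2f}")
print(f"One-time AI risk (constant): {P_AI_UNFRIENDLY:.2f}")
```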
And if AI is the only means of stopping the creation of dangerous viruses (this may be untrue, but we will suppose it for the sake of the argument), then we need AI as early as possible, even if an earlier AI has a smaller chance of being friendly.
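A minimal sketch of this trade-off, reusing the toy virus hazard above and adding an assumed, equally made-up curve for how the chance of friendliness grows with extra preparation time; which arrival date minimizes total risk depends entirely on these invented parameters:

```python
import math

# Continues the toy numbers above; every parameter here is made up.
P_VIRUS_START = 0.001    # assumed annual virus risk in year 0
DOUBLING_YEARS = 5       # assumed biotech doubling time

def cumulative_virus_risk(years: int) -> float:
    """Chance of a virus catastrophe before year `years` under the toy hazard model."""
    survival = 1.0
    for t in range(years):
        survival *= 1.0 - min(1.0, P_VIRUS_START * 2 ** (t / DOUBLING_YEARS))
    return 1.0 - survival

def p_friendly(arrival_year: int) -> float:
    """Assumed (made-up) chance that AI arriving in `arrival_year` is friendly;
    it rises with preparation time and saturates near 0.9."""
    return 0.9 * (1.0 - math.exp(-arrival_year / 20.0))

def total_extinction_risk(arrival_year: int) -> float:
    """Total risk if AI arrives in `arrival_year` and, once created, stops all virus risk."""
    virus_risk = cumulative_virus_risk(arrival_year)
    return virus_risk + (1.0 - virus_risk) * (1.0 - p_friendly(arrival_year))

for year in (10, 25, 50):
    print(f"AI arrival in year {year}: total extinction risk {total_extinction_risk(year):.2f}")
```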
So this line of reasoning suggests that AI is not a net risk, because its benefits will outweigh its risks if we look at the larger picture.
Personally, I think we must invest in creating Safe AI, but we need to do it as soon as possible.
Update: the same logic may be applied to different efforts within the AI field. An AI does not need to be able to self-improve in order to cause human extinction. It may just be a 200-IQ Stuxnet that hacks critical infrastructure. Such an AI may appear before real self-improving AI, since the latter must be built on top of a non-self-improving but clever AI. So our efforts to prevent dangerous non-self-improving AIs may be more urgent, and all the work on Gödelian agents may be a misallocation of resources.
A 200-IQ Stuxnet is a self-improving AGI. Anything that has a real IQ is an AGI, and if it is smarter than human researchers on the subject, it can self-improve.
It may not use its technical ability to self-improve in order to kill all humans. It may also limit itself to low-level self-improvement, a.k.a. learning. Self-improvement is not a necessary condition for UFAI, but it may be one of its instruments.