First, EY is concerned about risks from technologies that have not yet been developed; as far as I know, there is no reliable way to predict whether or when such technologies will be developed.
As was mentioned in other threads, SIAI’s main arguments rely on disjunctions and antipredictions more than on conjunctions and predictions. That is, an argument that holds across several different technology scenarios leading to the same broad outcome is much stronger than one that depends on a single very detailed scenario.
For instance, the claim that AI presents a special category of existential risk is supported by such a disjunction. There are several technologies today that we know would be very dangerous given the right clever ‘recipe’: simple molecular nanotech machines, engineered custom viruses, intrusions into very sensitive or essential computer systems, etc. What these all imply is that a much smarter agent with a lot of computing power would be a severe existential threat if it chose to be one.