>The fourth thing Bostrom says is that we will eventually face other existential risks, and AGI could help prevent them. No argument here, I hope everyone agrees, and that we are fully talking price.
>It is not sufficient to choose the ‘right level of concern about AI’ by turning the dial of progress. If we turn it too far down, we probably get ourselves killed. If we turn it too far up, it might be a long time before we ever build AGI, and we could lose out on a lot of mundane utility, face a declining economy and be vulnerable over time to other existential and catastrophic risks.
I feel it’s worth pointing out that for almost all X-risks other than AI, while AI could solve them, there are also other ways to solve them that are not in and of themselves X-risks. Thus, when talking price, only the marginal gain from using AI should be considered.
In particular, your classic “offworld colonies” solve most of the risks. There are two classes of risk where this is not foolproof:
1. Intelligent adversaries. AI itself and aliens fall into this category. Let’s also chuck in divine/simulator intervention. These can’t be blocked by space colonisation at all.
2. Cases where you need out-of-system colonies to mitigate the risk. These pose a thorny problem because, absent ansibles, you can’t reliably maintain a Jihad over light-years. The obvious, albeit hilariously long-term, case here is the Sun burning out, although there are shorter-term risks like somebody making a black hole with particle physics and then punting it into the Sun (which would, TTBOMK, cause a nova-like event).
Still, your grey-goo problem and your pandemic problem are fixed, which makes the X-risk “price” of not doing AI a lot lower than it might look.