Tamsin—interesting points.
I think it’s important for the ‘Pause AI’ movement (which I support) to help politicians, voters, and policy wonks understand that ‘power to do good’ is not necessarily correlated with ‘power to deter harm’ or ‘power to do indiscriminate harm’. So, advocating for caution (‘OMG AI is really dangerous!’) should not be read as claiming AI has ‘power to do good’ or ‘power to deter harm’, since that reading could incentivize gov’ts to pursue AI despite the risks.
For example, nuclear weapons can’t really do much good (except maybe for blasting incoming asteroids), but they have some power to deter use of nuclear weapons by others, and a lot of power to do indiscriminate harm (e.g. global thermonuclear war).
Whereas engineered pandemic viruses would have virtually no power to do good and no power to deter harm; they offer only power to do indiscriminate harm (e.g. global pandemic).
Arguably, ASI might have a LOT more power to do indiscriminate harm than power to deter harm or power to do good.
If we can convince policy-makers that this is a reasonable viewpoint (ASI offers mostly indiscriminate harm, not good or deterrence), then it might be easier to achieve a helpful pause, and to reduce the chance of an AI arms race.