Here’s why I personally think solving AI alignment is more effective than generally slowing tech progress:
- If we had aligned AGI and coordinated on using it for the right purposes, we could use it to make the world less vulnerable to other dangerous technologies.
- It’s hard to slow technological progress across the board, but easier to steer the development of a single technology, namely AGI.
- Engineered pandemics and nuclear war are very unlikely to lead to unrecoverable societal collapse if they happen (see this report), whereas misaligned AGI seems relatively likely to do so (>1% chance).
- Other, even more dangerous technologies (perhaps nanotech) will probably be developed after AGI, so they are only worth worrying about if we solve AGI alignment first.