Isn’t the risk from insufficient AGI alignment relatively small compared to the vulnerable world hypothesis? I would expect that even without the invention of AGI, or with aligned AGI, we could still use more advanced AI techniques as research assistants that help us invent some kind of smaller/cheaper/easier-to-use atomic bomb that would destroy the world anyway. Essentially the question is: why so much focus on AGI alignment instead of a general slowing down of technological progress?
I think this option is quite underexplored. The fact that it’s hard to slow down progress doesn’t mean it isn’t necessary, or that it shouldn’t be researched more.
Here’s why I personally think solving AI alignment is more effective than generally slowing technological progress:
If we had aligned AGI and coordinated on using it for the right purposes, we could use it to make the world less vulnerable to other dangerous technologies.
It’s hard to slow down technological progress across the board; it’s easier to steer the development of a single technology, namely AGI.
Engineered pandemics and nuclear war are very unlikely to lead to unrecoverable societal collapse if they happen (see this report), whereas misaligned AGI seems relatively likely (>1% chance) to cause one.
Other, more dangerous technologies (like maybe nanotech) seem likely to be developed after AGI, so it’s only worth worrying about them if we can solve AGI alignment first.