I am also on the hawkish side, and my projections of the future have a fair amount in common with Leopold’s. Our recommendations for what to do, however, are very different. I am on team ‘don’t build and deploy AGI’ rather than team ‘build AGI’. I don’t think racing to AGI ensures the safety of liberal democracy; I believe it results in humanity’s destruction. I think that even if we had AGI today, we wouldn’t be able to trust it enough to use it safely without a lot more alignment work. If we trust it too much, we all die. If we don’t trust it, it is not very useful as a tool to help us. Ryan and Buck’s AI control theory helps somewhat with being able to use an untrustworthy AI, but it supposes that the creators will be wise enough to adhere to the careful control plan. I don’t trust that they will. I think they’ll screw up and unleash a demon.
There is a different path available: seek an international treaty, and establish strong enforcement mechanisms, including inspections and constant monitoring of all datacenters and bio labs everywhere in the world. To ensure the safety of humanity, we must prevent the development of bioweapons and the rise of rogue AGI.
If we can’t come to such an arrangement peacefully with all the nations of the world, we must prepare for war. If we fail to enforce the worldwide halt on AI and bioweapons, we all die.
But I don’t think that preparing for war should include racing for AGI. That is a major point on which Leopold and I differ in our recommendations for current and future action.