Robin: Well at long last you finally seem to be laying out the heart of your argument. Dare I hope that we can conclude our discussion by focusing on these issues, or are there yet more layers to this onion?
It takes two people to make a disagreement; I don’t know what the heart of my argument is from your perspective!
This essay treats the simpler and less worrisome case of nanotech. Quickie preview of AI:
When you upgrade to AI there are harder faster cascades because the development idiom is even more recursive, and there is an overhang of hardware capability we don’t understand how to use;
There are probably larger development gaps between projects due to a larger role for insights;
There are more barriers to trade between AIs, because of the differences of cognitive architecture—different AGI projects have far less in common today than nanotech projects, and there is very little sharing of cognitive content even in ordinary AI;
Even if AIs trade improvements among themselves, there’s a huge barrier to applying those improvements to human brains, uncrossable short of very advanced technology for uploading and extreme upgrading;
So even if many unFriendly AI projects are developmentally synchronized and mutually trading, they may come to their own compromise, do a synchronized takeoff, and eat the biosphere; without caring for humanity, humane values, or any sort of existence for themselves that we regard as worthwhile...
But I don’t know if you regard any of that as the important part of the argument, or if the key issue in our disagreement happens to be already displayed here. If it’s here, we should resolve it here, because nanotech is much easier to understand.