I think people have already considered this, but the strategies converge. If someone else is going to make it first, you have only two possibilities: seize control by exerting a strategic advantage, or let them keep control but convince them to make it safe.
To do the former is very difficult, and the little bit of thinking that has been done about it has mostly exhausted the possibilities. To do the latter requires something like
1) giving them the tools to make it safe,
2) doing enough research to convince them to use your tools or fear catastrophe, and
3) opening communications with them.
So far, MIRI and other organizations are focusing on 1 and 2, whereas you’d expect them to primarily do 1 if they expected to get it first. We aren’t doing 3 with respect to China, but that is a step that isn’t easy at the moment and will probably get easier as time goes on.
I am now writing an article in which I explore this type of solution.
One idea, similar to the ones you listed, is to sell AI safety as a service, so that any other team could hire AI safety engineers to help align their AI (basically, it combines the tool with a way of delivering it).
Another (I don’t claim it is the best, only that it is possible) is to create as many AI teams in the world as possible, so that a hard takeoff will always happen in several teams at roughly the same time and the world will end up divided into several domains. A simple calculation (sketched below) suggests that we would need around 1,000 AI teams running simultaneously to get multiple fooms. In fact, the actual number of AI startups, research groups, and powerful individuals is around 1,000 now and growing.
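As a very rough illustration of what such a calculation might look like (the per-team takeoff rate, the one-year window, and the Poisson-style model below are my own illustrative assumptions, not taken from anywhere in this thread): if each of N teams independently triggers a hard takeoff at a small rate p per year, then the expected number of takeoffs inside any window of length w years is about N·p·w, and that product needs to be at least a few for several fooms to land close together.

```python
# Back-of-the-envelope sketch (illustrative assumptions only, not the original
# commenter's calculation): model each of N teams as independently triggering a
# hard takeoff at a constant rate p per year, and ask how likely it is that
# several takeoffs land inside the same short window.

from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of exactly k events for a Poisson distribution with mean lam."""
    return lam ** k * exp(-lam) / factorial(k)

def prob_at_least(k_min: int, n_teams: int, p_per_year: float, window_years: float) -> float:
    """Probability that at least k_min teams foom within the same window."""
    lam = n_teams * p_per_year * window_years  # expected takeoffs in the window
    return 1.0 - sum(poisson_pmf(k, lam) for k in range(k_min))

# Purely illustrative numbers: 1,000 teams, a 1% chance per team per year, and a
# one-year takeoff window give an expected ~10 takeoffs in the window, so
# "at least 3 near-simultaneous fooms" comes out nearly certain (~0.997).
print(prob_at_least(3, n_teams=1000, p_per_year=0.01, window_years=1.0))
```

With these made-up numbers, ~1,000 teams is the order of magnitude at which several near-simultaneous takeoffs become the default outcome; a lower per-team rate would require proportionally more teams.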
There are also some other ideas; I hope to publish a draft here on LW next month.