My proposal would roughly be that the US government (in collaboration with allies etc.) enforces that no one builds AI which is qualitatively smarter than humans, and this should be the default plan.
(This might be doable without government support via coordination between multiple labs, but I basically doubt it.)
There could be multiple AI projects backed by the US+allies or just one; either could be workable in principle, though multiple seems tricky.
TBC, I don't think there are plausible alternatives to at least some US government involvement which don't require committing a bunch of massive crimes.
I have a policy against committing or recommending committing massive crimes.