Regarding ‘slowing down AI research’...
I’m in the camp of “slowing down uncontrolled AI research means starting a world war, because it means dropping ICBMs on the data centers of all uncooperative foreign nations (in unprovoked, unwarned-of preemptive strikes, so they don’t have time to hide their computers) and leaning hard on the cooperative ones to accept <preemptive-striking-government>’s armed guards and full <preemptive-striking-government> ownership/control of all their public & private data centers.” I still think it might be a wise move at this point given the risks, but I don’t see it as realistically on the table as a political option. That’s just… a lot.
As long as I’m imagining totally infeasible solutions, I’d rather imagine a nice one, like… the UN calls an emergency meeting, everyone agrees to cooperate and not do unapproved/unmonitored AI research, and then people actually adhere to the agreement. That’s a much nicer fantasy.
I don’t expect either of these futures to come about, so we might as well focus our planning energy on plausibly reachable futures.