Imagine you are in charge of choosing how fast DeepMind develops tech. Go too fast and you have a smaller chance of alignment. Go too slow and North Korea may beat you.
There isn't much reason to go significantly faster than North Korea in this scenario. If you can go a bit faster and still make something that is probably aligned, do that.
In a worse situation, taking your time and hoping for a drone strike on North Korea is probably the best bet.
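To make the tradeoff concrete, here is a minimal toy sketch of that reasoning. Everything in it is an invented assumption for illustration: "speed" is an abstract development rate, the rival is assumed to move at a fixed rate, and the alignment probability is assumed to fall off linearly as you rush.

```python
# Toy model of the speed/alignment tradeoff described above.
# All numbers and functional forms are illustrative assumptions,
# not claims about any real lab or country.

RIVAL_SPEED = 1.0  # assumed fixed development rate of the rival project


def p_win_race(speed: float) -> float:
    """You only get to deploy first if you move faster than the rival."""
    return 1.0 if speed > RIVAL_SPEED else 0.0


def p_aligned(speed: float) -> float:
    """Assumed toy form: alignment probability decays as speed increases."""
    return max(0.0, 1.0 - 0.4 * speed)


def p_good_outcome(speed: float) -> float:
    """Good outcome = you win the race AND the result is aligned."""
    return p_win_race(speed) * p_aligned(speed)


if __name__ == "__main__":
    for speed in (0.8, 1.05, 1.5, 2.0):
        print(f"speed={speed:4.2f}  P(good outcome)={p_good_outcome(speed):.2f}")
```

Under these assumptions the best choice is just barely faster than the rival (speed of about 1.05 here): going much faster only lowers the chance of alignment, which is the point that there isn't much reason to go significantly faster than the other project.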
That would plausibly be the case if researchers in democracies are trying to get as close to the AGI line as is allowed (because that gives useful capabilities), which in turn seems much more plausible to me than democracies globally coordinating to avoid anything even vaguely close to AGI.
Coordinating on a fuzzy boundary no one can define or measure is really hard. If coordination happens, it will be to avoid something simple, like any project using more than X compute.
I don't think "conceptually close to AGI" equals "profitable". There is simple, dumb, money-making code. And there is code that contains all the ideas for AGI but is missing one tiny piece, and so is useless.