Why does superintelligence require global coordination? Apparently all one needs to do is to develop an FAI, and the rest will take care of itself.
E.g., AI regulation (like most technology regulation) is only effective if you get the whole world on board; without global coordination there’s the potential for arms races.
“Only develop an FAI” also presumes a hard takeoff, and it’s not exactly established beyond all doubt that we’ll have one.
Preventing UFAI, dealing safely with Oracles, or using reduced-impact AIs all require global coordination. Only the “FAI in a basement” approach doesn’t.
Because FAI is a hard problem. If it were easy, we would not still be paying people $70 trillion per year worldwide to do work that machines aren’t yet smart enough to do.
Almost all of these approaches are hard problems, so that alone seems an insufficient objection.
That proposal also involves global coordination.