We don’t want the first AI that FOOMs effectively to win. We want a provably Friendly AI to win.
This seems to me to frame the problem incorrectly. Today's self-improving systems are corporations: a mix of human and machine components. Nobody proves anything about their self-improvement trajectories—but that doesn't necessarily mean they are destined to go off the rails. The idea that growth will be so explosive that it can't be dynamically steered neglects the possibility of throttles.
A “provably-Friendly AI” doesn’t look very likely to win—so due attention should be given to all the other possibilities with the potential to produce a positive outcome.