It seems like “proving” an AI design will be friendly is like proving a system of government won’t lead to the economy going bad.
That doesn’t sound impossible. Consider that in the case of a seed AI, the “government” only has to deal with a single perfectly rational, textbook game-theoretic agent. The main reason economists fail to predict how a given policy will affect the economy is that their models must contend with many unknown or unpredictable factors. In the case of an AI, the policy is applied to the model itself, which is a well-defined mathematical entity.