There is a large, continuous spectrum between making an AI and hoping it works out okay, and waiting for a formal proof of friendliness.
Exactly this!
I think there is a U-shaped response curve of risk versus rigor: too little rigor ensures disaster, but too much rigor ensures that a low-rigor alternative is completed first.
When discussing the correct course of action, I think it is critical to consider not just the probability of success but also the time to success. As far as I've seen, arguments in favor of SIAI's course of action have completely ignored this essential aspect of the decision problem.
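To make the race dynamic concrete, here is a toy model with entirely made-up numbers (not anyone's actual estimates): two projects race to finish first, more rigor makes our project safer conditional on finishing first but also slower, and the rival's behavior is fixed. Under these assumptions the overall probability of disaster comes out U-shaped in rigor.

```python
import math

# Toy race model (all parameters are hypothetical, chosen only to
# illustrate the shape of the tradeoff). Completion times are modeled
# as exponential, so P(we finish first) = our_rate / (our_rate + rival_rate).

RIVAL_RATE = 0.2      # rival's completion rate (arbitrary units)
RIVAL_FAILURE = 0.9   # P(unfriendly AI | rival finishes first)

def p_disaster(rigor: float) -> float:
    """P(unfriendly AI) as a function of our rigor level in [0, 1]."""
    our_failure = 0.9 * (1.0 - rigor)   # rigor buys safety...
    our_rate = math.exp(-3.0 * rigor)   # ...but costs speed
    p_first = our_rate / (our_rate + RIVAL_RATE)
    return p_first * our_failure + (1.0 - p_first) * RIVAL_FAILURE

if __name__ == "__main__":
    for r in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"rigor={r:.2f}  P(disaster)={p_disaster(r):.3f}")
```

With these particular numbers the risk falls from 0.90 at zero rigor to roughly 0.66 at intermediate rigor, then climbs back toward 0.72 at maximum rigor; the exact minimum depends entirely on the assumed speed and safety curves, which is precisely the part of the decision problem that time-to-success arguments need to address.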