The whole point of the ‘Scary Idea’ is that there should be effective quality control for GAI; otherwise the risks are too big.
At the moment humanity has no idea how to implement such quality control, which would be some way to check whether an arbitrary AI-in-a-box is Friendly.
Ergo, if a GAI is launched before the Friendly AI problem has at least some solutions, it will have been launched without any quality control. Scary. At least to me.