I agree that if TRIZ-Ingenieur thinks regulatory bodies are strong, infallible, and incorruptible, then he is wrong. I don’t see any particular reason to think he thinks that, though. It may in fact suffice for regulatory bodies’ weaknesses, errors and corruptions to be different from those of the individual humans being regulated, which they often are.
(I do not get the impression that T-I thinks “mere humans can’t be trusted with AI development” in any useful sense[1].)
[1] Example of a not-so-useful sense: it is probably true that mere humans cannot be trusted with AI development, or with anything else, with 100% confidence of safety, and the same will be true of regulatory bodies. But this doesn’t yield a useful argument against AI development for anyone who cares about averages and probabilities rather than only about the very worst case.