Me: AGI is a William Tell target. A near miss could be very unfortunate. We can’t responsibly take a proper shot till we have an appropriate level of understanding and confidence in our accuracy.
Caledonian: That’s not how William Tell managed it. He had to practice aiming at less-dangerous targets until he became an expert, and only then did he attempt to shoot the apple.
Yes, by “take a proper shot” I meant shooting at the proper target with proper shots. And yes, practice on less-dangerous targets is necessary, but it’s not sufficient.
It is not clear to me that it is desirable to prejudge what an artificial intelligence should desire or conclude, or even possible to purposefully put real constraints on it in the first place. We should simply create the god, then acknowledge the truth: that we aren’t capable of evaluating the thinking of gods.
I agree we can’t accurately evaluate superintelligent thoughts, but that doesn’t mean we can’t or shouldn’t try to affect what it thinks or what its goals are.
I couldn’t do this argument justice here. I encourage interested readers to read Eliezer’s paper on coherent extrapolated volition.