That’s not how William Tell managed it. He had to practice aiming at less-dangerous targets until he became an expert, and only then did he attempt to shoot the apple.
It is not clear to me that it is desirable to prejudge what an artificial intelligence should desire or conclude, or even possible to purposefully put real constraints on it in the first place. We should simply create the god, then acknowledge the truth: that we aren’t capable of evaluating the thinking of gods.
It is not clear to me that it is desirable to prejudge what an artificial intelligence should desire or conclude,
But it shouldn’t conclude that throwing large asteroids at Yellowstone is a good idea, nor desire to do it. If you follow this strategy, you’ll doom us. Simple as that.