It explicitly granted that if we presume a FOOM, then yes, trying to do anything with heuristic soups seems useless, just something that will end up killing us all.
> …and whether there could be a soft takeoff during which some people prevented those powerful-but-not-yet-superintelligent heuristic soups from killing everyone while others put the finishing touches on the AGI that could actually be trusted to remain Friendly when it actually did FOOM.
I’m not sure how this could work, if provably-Friendly AI has a significant speed disadvantage, as the OP argues. You can develop all kinds of safety “plugins” for heuristic AIs, but if some people just don’t care about the survival of humans or of humane values (as we understand them), then they’re not going to use your ideas.
> …provably-Friendly AI has a significant speed disadvantage, as the OP argues.
Yes, the OP made that point. But I have heard the opposite from SI-ers—or at least they said that in the future SI’s research may lead to implementation secrets that should not be shared with others. I didn’t understand why that should be.
> …or at least they said that in the future SI’s research may lead to implementation secrets that should not be shared with others. I didn’t understand why that should be.
It seems pretty understandable to me… SI may end up having some insights that could speed up UFAI progress if made public, and at the same time provably-Friendly AI may be much more difficult than UFAI. For example, suppose that in order to build a provably-Friendly AI, you first have to understand how to build an AI that can optimize an arbitrary utility function. Publishing that architectural insight would hand would-be UFAI builders most of what they need, while the Friendly project would still face the much longer task of figuring out how to specify the correct utility function.
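To make that asymmetry concrete, here is a minimal sketch in Python of what “an AI that works with an arbitrary utility function” might look like. Everything in it (`ExpectedUtilityAgent`, `human_values`, the toy world model) is a hypothetical illustration, not SI’s actual design; the point is only the shape of the problem: the utility-agnostic machinery is reusable by anyone, Friendly or not, while the safety-critical part is a single missing function.

```python
# Toy sketch: an agent architecture that is agnostic about its utility
# function. All names here are hypothetical illustrations, not anyone's
# actual design.
from typing import Callable, Iterable

State = dict   # stand-in for a world-model state
Action = str

class ExpectedUtilityAgent:
    """Picks whichever available action leads to the highest-utility
    predicted state. Swapping `utility` swaps the goal without touching
    any of the optimization machinery."""

    def __init__(self, utility: Callable[[State], float]):
        self.utility = utility

    def predict(self, state: State, action: Action) -> State:
        # Placeholder world model; a real agent's predictive machinery
        # would live here, and would be identical for every utility function.
        successor = dict(state)
        successor["last_action"] = action
        if action == "make_paperclip":
            successor["paperclips"] = state.get("paperclips", 0) + 1
        return successor

    def choose(self, state: State, actions: Iterable[Action]) -> Action:
        return max(actions, key=lambda a: self.utility(self.predict(state, a)))

# Plugging in *a* utility function is trivial...
paperclipper = ExpectedUtilityAgent(lambda s: float(s.get("paperclips", 0)))
print(paperclipper.choose({"paperclips": 0}, ["wait", "make_paperclip"]))
# -> "make_paperclip"

# ...but the Friendly version needs `human_values`, a correct formal
# specification of everything we care about. That is the unsolved and
# potentially much slower part:
# friendly = ExpectedUtilityAgent(human_values)
```

If an insight like the utility-agnostic architecture above were published, anyone could supply their own `utility`; only the Friendly builders would be stuck waiting on the hard part.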
Maybe that concession shouldn’t be granted so readily?