or at least they said that in the future SI’s research may lead to implementation secrets that should not be shared with others. I didn’t understand why that should be.
It seems pretty understandable to me… SI may end up with insights that would speed up UFAI progress if made public, while at the same time provably-Friendly AI may be much harder to build than UFAI. For example, suppose that building a provably-Friendly AI requires first understanding how to build an AI that works with an arbitrary utility function, and that figuring out how to specify the correct utility function then takes much longer. Publishing the first part would hand would-be UFAI builders everything they need, long before the Friendly version is ready.
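To make the asymmetry concrete, here's a toy sketch (purely illustrative; every name is hypothetical and this resembles no real AI design): the expected-utility loop below is generic over its utility function, so any insight that makes such machinery work, or work faster, is equally useful to someone who plugs in the wrong utility.

```python
from typing import Callable, Iterable

def choose_action(
    actions: Iterable[str],
    outcomes: Callable[[str], list[tuple[float, object]]],  # action -> [(prob, outcome)]
    utility: Callable[[object], float],                      # the "pluggable" utility function
) -> str:
    """Pick the action with the highest expected utility.

    Note that nothing in this optimization loop depends on what
    `utility` values -- the machinery is agnostic to it.
    """
    return max(
        actions,
        key=lambda a: sum(p * utility(o) for p, o in outcomes(a)),
    )

if __name__ == "__main__":
    # Toy example: two actions, deterministic outcomes, a made-up utility.
    model = {"press_button": [(1.0, "world_A")], "wait": [(1.0, "world_B")]}
    u = lambda world: 1.0 if world == "world_A" else 0.0
    print(choose_action(model.keys(), lambda a: model[a], u))  # -> "press_button"
```

The point of the sketch is that the hard, publishable part (the optimization machinery) and the Friendliness-relevant part (the `utility` argument) are separable, which is exactly why the first could be dangerous to share on its own.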