I definitely wouldn’t rule out the possibility that we could formally define a set of tests that would satisfy our demands for alignment. The most I could say with certainty is that it’s a lot harder than eliminating software security bug classes. But I also wouldn’t rule out the opposite possibility: that an optimizing process of arbitrarily strong capability simply could not be aligned, at least not to a level of assurance a human could comprehend.
Thank you for these additional references; I was trying to anchor this article with some very high-level concepts. I very much expect that, to succeed, we’re going to have to invent and test hundreds of formalisms before we can achieve any kind of confidence about the alignment of a system.