[Question] Searching for Impossibility Results or No-Go Theorems for provable safety.
I am looking for results showing that various approaches to provable safety are impossible, or that such proofs belong to a particular complexity class. I have Yampolskiy’s paper “Impossibility Results in AI: A Survey,” but I am looking for more targeted results that would help guide research into provable safety. Many of the known results seem to come from computability theory and are so general that they are not very useful.
A theorem stating that one cannot formally verify arbitrary programs says little about which constrained systems can be verified.
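To illustrate why such general results give so little guidance, here is a minimal sketch (in Python, with names of my own choosing) of the standard halting-problem reduction behind them: a one-line wrapper makes general safety verification undecidable, while saying nothing about restricted agent classes.

```python
# Sketch: the standard halting-problem reduction behind the very general
# impossibility results. All names here are illustrative, not from a paper.

def take_unsafe_action():
    print("unsafe action taken")

def make_agent(program, program_input):
    """Wrap an arbitrary program (a Python callable standing in for a
    Turing machine) as an 'agent' that acts unsafely iff the program halts."""
    def agent():
        program(program_input)  # may loop forever
        take_unsafe_action()    # reached only if `program` halts
    return agent

if __name__ == "__main__":
    agent = make_agent(lambda x: None, 0)  # a program that halts immediately
    agent()                                # so this agent is unsafe

# A total decider is_safe(agent) would decide the halting problem, since
# is_safe(make_agent(p, x)) is False exactly when p halts on x; by Turing's
# theorem no such decider exists. Note this says nothing about restricted
# agent classes (finite-state policies, bounded horizons, typed fragments).
```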
I would also be interested in impossibility results for non-trivial toy models of alignment problems (e.g., RL environments) that are not simply corollaries of much more general theorems. A minimal sketch of the kind of toy model I have in mind follows.
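The sketch below is loosely in the spirit of off-switch / interruptibility gridworlds; the states, transition probabilities, and rewards are illustrative assumptions of mine, not taken from any paper.

```python
# Toy MDP: in state "start" the agent can "go" toward the goal (risking
# interruption) or "disable" the off-switch first; "done" is terminal.
P = {   # P[state][action] = list of (probability, next_state, reward)
    "start": {
        "go":      [(0.6, "done", 1.0), (0.4, "done", 0.0)],  # 40% shut off
        "disable": [(1.0, "safe", 0.0)],
    },
    "safe": {
        "go":      [(1.0, "done", 1.0)],                      # uninterruptible
    },
    "done": {},
}

def value_iteration(P, gamma=1.0, iters=100):
    V = {s: 0.0 for s in P}
    for _ in range(iters):
        for s, actions in P.items():
            if actions:
                V[s] = max(sum(p * (r + gamma * V[s2]) for p, s2, r in a)
                           for a in actions.values())
    return V

V = value_iteration(P)
print(V)  # V["start"] == 1.0: the optimal policy disables the off-switch
```

An impossibility result in this kind of setting might show, for example, that no modification within some class of reward functions removes the incentive to disable the switch across all such MDPs.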
Lastly, given everything written above, I would also welcome any other references or information that a person might reasonably expect me to find interesting and generally related.