FYI, the reason the post is titled “Why don’t people use formal methods?” is because that is what the link is called, not because I was wondering why people in AI safety don’t use formal methods :)

I don’t think formal methods are likely to be a magic bullet for AI safety. However, if one wants a provably safe AI, it seems like developing the capability to write provably correct software in easier cases could be a good first step (a necessary but not sufficient condition, if you will), and the article discusses our progress towards that goal. The article also discusses the specification problem a bit, and it seems plausible that insights from how to do formal specifications for software in general will transfer to FAI.

The field seems neglected in general, relative to something like machine learning: “There really isn’t a formal methods community so much as a few tiny bands foraging in the Steppe”.

Finally, my own personal judgement: the “writing down a specification of what we want an AI to optimize” part seems like it might be more amenable to being solved by a sudden flash of insight (in the same way Nick Bostrom has suggested the problem of AGI more generally might be), whereas the problem of formally verifying software seems less like that, and if verification ends up being a long slog, it might actually constitute the majority of the problem. So for those reasons, it seemed to me that LW might benefit from additional awareness and discussion of formal verification.
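For concreteness, here is a minimal sketch of that specification/verification split in Lean (a toy example of my own, not something taken from the article): the theorem statements play the role of the specification, and the proofs are the verification work the implementation has to discharge.

```lean
-- Toy example (my own, hypothetical; not from the linked article).

-- Implementation.
def myMax (a b : Nat) : Nat := if a ≤ b then b else a

-- Specification: the result is an upper bound on both inputs.
-- Writing these statements down is the "specification" step;
-- getting Lean to accept the proofs is the "verification" step.
theorem myMax_ge_left (a b : Nat) : a ≤ myMax a b := by
  unfold myMax; split <;> omega

theorem myMax_ge_right (a b : Nat) : b ≤ myMax a b := by
  unfold myMax; split <;> omega
```

Even in this toy case the spec is deliberately partial (it says nothing about the result equaling one of the inputs), which is a small-scale version of the specification problem: the proofs only guarantee what you remembered to write down.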
FYI, the reason the post is titled “Why don’t people use formal methods?” is because that is what the link is called, not because I was wondering why people in AI safety don’t use formal methods :)
Got it, my mistake.
“There really isn’t a formal methods community so much as a few tiny bands foraging in the Steppe”
I’ve got to disagree with that; I used to be very closely adjacent to the formal methods community, if not a part of it myself. It’s much bigger than, e.g., the academic AI safety community. Sure, it’s not as big as ML, but almost nothing is as big as ML.
The rest of the reasons seem reasonable. I personally disagree that the capability to write provably correct software in easier cases is a necessary first step, but I don’t think I have particularly strong explicit reasons for that position.
ETA: Also, I do know a few AI safety people explicitly excited about formal methods and verification who will likely be working on it in the near future, if not already.
Thanks for the info!