This post does answer some questions I had regarding the relevance of mathematical proof to AI safety, and the motivations behind using mathematical proof in the first place. I don’t believe I’ve seen this bit before:
the idea that something-like-proof might be relevant to Friendly AI is not about achieving some chimera of absolute safety-feeling
...I’ve actually said it many, many times before but there’s a lot of people out there depicting that particular straw idea (e.g. Mark Waser).
I don’t read a lot of other people’s stuff about your ideas (e.g. Mark Waser), but I have read most of the things you’ve published. I’m surprised to hear you’ve said it many times before.