I expected the Lamport paper to be mentioned, as it describes a known catastrophic failure mode for autonomous systems, connected with the Buridan's ass problem and with an infinite regress in predicting how long the problem will take to solve. I think this problem is underexplored in AI safety, despite a previous attempt to present it on LessWrong.
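To make the regress concrete, here is a toy Python sketch (my own illustration, not taken from Lamport's paper): an agent tries to pick whichever of two options it expects to finish sooner, but its time estimate must include the time spent producing the estimate, which calls for a further estimate, and with symmetric options no finite amount of extra deliberation breaks the tie. The function names and constants are made up for illustration.

```python
# Toy sketch of the estimation regress (hypothetical, not from Lamport's paper).

def estimate_finish_time(task: str, depth: int, max_depth: int) -> float:
    """Estimate when `task` will be done, including the cost of estimating."""
    base_cost = 1.0  # assumed fixed cost of actually doing the task
    if depth >= max_depth:
        # A real agent has no principled place to cut the recursion off;
        # the cap exists only so this sketch terminates.
        return base_cost
    # The estimate must include the time spent producing the estimate itself,
    # which requires a further estimate, and so on.
    estimation_cost = 0.5 * estimate_finish_time(task, depth + 1, max_depth)
    return base_cost + estimation_cost

def choose(option_a: str, option_b: str, max_depth: int) -> str:
    """Buridan-style choice: pick whichever option looks faster to finish."""
    t_a = estimate_finish_time(option_a, 0, max_depth)
    t_b = estimate_finish_time(option_b, 0, max_depth)
    # With symmetric options the estimates tie at every depth, so deeper
    # deliberation never produces a reason to prefer one over the other.
    return option_a if t_a < t_b else option_b

if __name__ == "__main__":
    for d in (1, 5, 20):
        print(d, choose("left bale", "right bale", max_depth=d))
```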