For example, I’ve mostly stopped working on decision theory because it seems to help UFAI as much as FAI.
I think there are potential avenues of development of decision theory that might help FAI more than uFAI; I think maybe you should talk to Steve Rayhawk to see if he has any thoughts about this.
Anyway I praise your prudence, especially as it seems like a real logical possibility that AGI can’t be engineered without first solving self-reference and logical uncertainty.