Not literally “any sort of AGI” of course, but… yes, several of the architecture problems required for FAI also make uFAI more probable. Kind of a shitty situation, really.
Wikipedia says Steve Omohundro has “discovered that rational systems exhibit problematic natural ‘drives’ that will need to be countered in order to build intelligent systems safely.”
Is he referring to the same problem?
EDIT: I answered my question by finding this.