From my (arguably layman) perspective, it seems that making progress on a lot of those problems makes unfriendly AI more probable as well. If, for example, you had an ideal approximation of perfect Bayesianism, that seems like something that could be used to build any sort of AGI.
Not literally “any sort of AGI” of course, but… yes, several of the architecture problems required for FAI also make uFAI more probable. Kind of a shitty situation, really.
Wikipedia says Steve Omohundro has “discovered that rational systems exhibit problematic natural ‘drives’ that will need to be countered in order to build intelligent systems safely.”
Is he referring to the same problem?
EDIT: I answered my own question by finding this.