I agree, with two modifications: (1) rationality of the "taking ideas seriously" kind is critical; without it, one can spend any amount of time without getting anywhere, and (2) FAI is not a likely outcome; random AGI could just as easily come out of this, given that you are only assuming fast processing speed and not necessarily self-reinforcing rationality, i.e. essentially future ems.
I concur with both. I add the caveat that the "taking ideas seriously" rational vampire would clearly be best served by vamping willing FAI researchers. That could be expected to raise p(FAI | AGI) well above real-world values. I say "above" only because the scenario eliminates some significant contributors to error (memory failure, fatigue, time pressure, and cognitive decline from aging).