In this subscenario, does the AGI eventually become superintelligent? If so, don’t we still need a reason why it doesn’t disassemble humans at that point, which might be A, B, C or D?
XiXiDu seemed to place importance on the possibility that "expert systems" which don't count as AGI could beat the general intelligence in some area. Since we were discussing risk to humanity, I take this to include the unstated premise that defense could somehow become about as easy as offense, if not easier. (Tell us if that seems wrong, Xi.)
I suppose it's D that I'm thinking of.