There’s a subscenario of c.ii that I think is worth considering: there turns out to be some good theoretical reason why even an AGI with access to and full-stack understanding of its own source code cannot FOOM—a limit of some sort on the rate of self-improvement. (Or is this already covered by D?)
In this subscenario, does the AGI eventually become superintelligent? If so, don’t we still need a reason why it doesn’t disassemble humans at that point, which might be A, B, C or D?
XiXiDu seemed to place importance on the possibility of narrow “expert systems” that don’t count as AGI beating a general intelligence in some area. Since we were discussing risk to humanity, I take this to include the unstated premise that defense could somehow become about as easy as offense, if not easier. (Tell us if that seems wrong, Xi.)
I guess it is D that I’m thinking of.