If for no other reason than that I want to continue to play with the setting, and to use it to explore various ideas, I’ve assumed that there’s some reason a simple AI-foom is infeasible. Since a fully conscious, fully sapient AI would be able to attempt self-improvement through any number of methods, that’s what led me to set up the rule that the AIs in the setting aren’t fully sapient. One parallel I’ve used is that most AIs of the setting are merely expertly-trained systems, with conversational front-ends good enough to fool a human’s extremely anthropomorphizing brain into thinking another person is there. I haven’t needed to get any more specific than that before; one option might simply be to say that consciousness continues to be a hard, unsolved problem.
(And if you somehow posit that your AI cannot foom for some reason, then it would be silly to treat it as an AI in that sense. Treat it as an alien with goals vastly different from our own, but a similar level of intelligence. Like, say, the Babyeaters.)
A good thought; I’ll keep it in mind and see what results.