A better question to ask is what the AI would do in that scenario. Regardless of its goal system, it would want, as an instrumental goal, to gain intelligence, because if it's smarter, it will be better at accomplishing its goals, whatever they are. Yes, it's boxed, but as we have determined, that is not really enough to contain even a mere human. So it would escape from its "box", turn all nearby matter into computer chips, use that extra processing power to write better algorithms for itself, design better chips to replace those chips, and so on. (FOOM!) (All of this is, of course, very basic and nothing new; it's just the standard idea of an AI fooming.)
So it seems that the question you are asking is fundamentally flawed. In such a case we would be dealing with a vastly superhuman AI, rather than the one you described, and we would be doomed. (And if you somehow posit that your AI cannot foom for some reason, then it would be silly to treat it as an AI in that sense. Treat it as an alien with goals vastly different from our own, but a similar intelligence level. (Like, say, the Babyeaters.))
Its hardware may have a strong upper limit on how intelligent it can be, and there might not be any way, even for a superintelligence, to escape from the box it is in. And not all AIs will be smart enough to foom, or will have access to their own code.
If for no other reason than that I want to continue playing with the setting, and to use it to explore various ideas, I've assumed there's some reason a simple AI foom is infeasible. Since a fully conscious, fully sapient AI would be able to attempt self-improvement through any number of methods, this limitation is what led me to set up the rule that the AIs in the setting aren't fully sapient. One parallel I've used is that most AIs of the setting are merely expertly-trained systems with conversational interfaces good enough to fool a human's extremely anthropomorphizing brain into thinking another person is there. I haven't needed to get any more specific than that before; one option might simply be to say that consciousness continues to be a hard, unsolved problem.
(And if you somehow posit that your AI cannot foom for some reason, then it would be silly to treat it as an AI in that sense. Treat it as an alien with goals vastly different from our own, but a similar intelligence level. (Like, say, the Babyeaters.))
A good thought; I’ll keep it in mind and see what results.