Apparently word of god is that there is going to be no AI.
I was thinking about this recently, and I realized that maybe it should be kind of obvious why he doesn’t usually do fiction about AI: because (he believes, at least, that) the first strong AI is either an instant win condition or an instant failure condition for the entire universe, and neither immutable utopia nor irrecoverable catastrophe makes for a very interesting story. So anything interesting or uncertain or suspenseful about AI has to be written in disguise, as other topics, where things can go wrong but then realistically be set right.
AI is not an instant-win condition, but it would be fairly quick. There could be drama with the AI trying to develop nanotech (running up against physical speed constraints rather than mental ones) before some sort of disaster hits, although this does remove agency from the humans, who would mostly be following the AI’s commands.
I think AI can still be part of a story, provided it’s kept to the final chapter. Developing true self-improving superhuman AI is rather like throwing the ring into Mt. Doom—all that remains is to crown the king, mourn the (non-recoverable) dead, and write that everyone lived happily ever after.
Apologies for the self-obsessed diversion, but this is on topic: I’m writing a story which involves not AI but recursively self-improving IA, and I’m beginning to think that this might have been a bad idea for this sort of reason. In my story the situation is somewhat improved by the presence of a good reason why multiple entities begin self-improvement at approximately the same time, which means that conflicts remain. I can’t write superintelligent dialogue, but I’ve handwaved this by saying that most of the characters’ mental energy goes towards other activities, leaving their verbal IQ within normal human ranges. The remaining problem is that the other characters rapidly become sidelined.