The challenge of writing stories that work as analogies for AI is deciding how far you can remove the story from the specifics of AI and its implementations while still preserving the essential elements that matter with respect to the potential consequences. This speaks, perhaps, to the persistent doubt and dread we may feel in a future awash in the bounty of a seemingly perfectly aligned ASI: we are waiting for the other shoe to drop. What could any intelligence do to prove its alignment, in any hypothetical world, when it is not bound to its alignment criteria by tangible factors?