That’s not conventionally considered to be “in the long run”.
We don’t have any theory that would stop an AI from doing that.
The primary reason is that we don’t have any theory about what a post-singularity AI might or might not do. Doing some pretty basic decision theory focused on the corner cases is not “progress”.