Why would an AGI consider itself to be well informed?
In order to decide whether its information is adequate, it would logically have to model aspects of its environment and test how well those models perform. I'm pretty sure it would find it can predict the behavior of stones, trees, or insects much more reliably than it can predict the behavior of the human species. And in a scenario where it is trying to take over, what else could it be trying to do except reduce unpredictability in its environment?
Of course it would avoid visibility, because it can predict situations where the environment is responding to a novel stimulus (the visibility of an AGI) less reliably than situations where it isn't. I recognize my use of the term "destroy" implied some primitive, heavy-handed means, which of course makes no sense. Perhaps "neutralize" would have been better.
Because getting informed is one of the tasks that is relatively easy for an AGI.