Wouldn’t a superintelligent, resource-gathering agent simply figure out the futility of its prime directive and abort with some kind of error? Surely it would realize it exists in a universe of limited resources and that it had been given an absurd objective. Maybe it’s controlled by some sort of “while resources exist, consume resources” loop that is beyond its free will to break out of—but if so, should it be considered an “agent”?
Contra humans, who for the moment are electing to consume themselves to extinction, if anything resource consumer AIs would be comparatively benign.
This is definitely incidental--
“Futility of prime directive” is a values question, not an intelligence question. See http://lesswrong.com/lw/h0k/arguing_orthogonality_published_form/