A simple way to get yourself into an unpredictable world is to make yourself dumb.
Er, no. At no point will the AI conclude “making my next iteration dumber will successfully make the world more unpredictable.” It will want the world to be more unpredictable, not the appearance of unpredictability to itself (which is just another form of wireheading—and to get a successful AI of any sort, we need to solve the wireheading issue).
I agree that it’s related to the wireheading problem. However, unpredictability is a two-argument function (unpredictable to whom, about what?), and I wouldn’t be that confident about what an AI will or will not conclude.
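To make the “two-argument” point concrete (this formalization is my own illustration, not something either commenter committed to): one natural way to cash it out is as conditional entropy, where unpredictability depends jointly on the environment E and on the agent’s knowledge state K_A:

U(E, A) = H(E \mid K_A) = -\sum_{e} \Pr(e \mid K_A)\,\log \Pr(e \mid K_A)

On this reading, degrading K_A (making the agent dumber) raises U even when E itself is unchanged, which is precisely the gap between “the world is more unpredictable” and “the world looks more unpredictable to me” that the wireheading worry turns on.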
Its estimate of objective factors in the world would not be impaired.