To demonstrate how chaos theory imposes some limits on the skill of an arbitrary intelligence, I will also look at a game: pinball.
If you watch a few games of professional pinball, the answer becomes clear. The typical strategy is to catch the ball with the flippers, then carefully hit it so that it takes a particular ramp which scores a lot of points and then returns the ball to the flippers. Professional pinball players try to avoid the parts of the board where the motion is chaotic. This is a good strategy because, if you cannot predict the motion of the ball, you cannot guarantee that it will not fall directly between the flippers where you cannot save it. Instead, professional pinball players score points mostly from the non-chaotic regions, where it is possible to predict the motion of the pinball.
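A minimal sketch (my own illustration, not from the discussion) of the property that makes the chaotic regions of a pinball table unpredictable: sensitive dependence on initial conditions. The pinball dynamics themselves are complicated, so this uses a standard stand-in, the chaotic logistic map x → 4x(1−x). Two trajectories started 1e-10 apart separate roughly exponentially, by about a factor of 2 per step on average, so even a tiny uncertainty in the ball's initial state quickly swamps any prediction.

```python
def logistic(x, steps):
    """Iterate the chaotic logistic map x -> 4x(1-x)."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

# Two nearly identical initial conditions, 1e-10 apart.
gap0 = 1e-10
gap20 = abs(logistic(0.2, 20) - logistic(0.2 + gap0, 20))

# After only 20 steps the gap has grown by many orders of magnitude.
print(gap0, "->", gap20)
```

The same qualitative behavior shows up in any chaotic system, including a ball bouncing among pinball bumpers: measurement precision buys you only a short prediction horizon, no matter how much computation you apply.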
I don’t see anything you wrote that I literally disagree with. But the emphasis seems weird. “You can’t predict everything about a game of pinball” would seem less weird. What an agent wants to predict about something, and what counts as a good prediction, depends on what the agent is trying to do. As you point out, there are predictable parts, and I claim the world is very much like this: there are many domains with many good-enough predictabilities, such that a superintelligence will not have a human-recognizable skill ceiling.