Hrmm… Well, if the AI is computable, it can only ever arrive at computable hypotheses, so we can enumerate them with any complete program specification language. I feel like I want to say that anything that isn’t computable doesn’t matter. What I mean is: if the AI encounters something that is truly outside its computable hypothesis space, then there’s nothing it can do about it.
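To make the “enumerate them” step concrete, here’s a minimal Python sketch. The hypothesis language is deliberately tiny and invented for the example: a “program” is just a bitstring that a stand-in interpreter cycles forever (which also sidesteps the halting problem by construction), and the 2^-length weights are the usual length-based prior. A real version would need a genuinely universal machine.

```python
# A toy sketch, not a real universal machine: enumerate "programs" in length
# order, keep the ones consistent with the observations so far, and weight
# each by the usual 2^-length prior. Here a "program" is a bitstring that
# our stand-in interpreter just cycles forever.
from itertools import product

def run_program(program, n_steps):
    """Stand-in interpreter: emit the program's bits cyclically."""
    return [program[i % len(program)] for i in range(n_steps)]

def consistent_hypotheses(observations, max_length):
    """Yield (program, prior weight) for every program, up to max_length,
    whose output matches the observations."""
    for length in range(1, max_length + 1):
        for bits in product((0, 1), repeat=length):
            if run_program(bits, len(observations)) == observations:
                yield bits, 2.0 ** -length

# The sequence 0,1,0,1 is explained by (0, 1) and by (0, 1, 0, 1), with the
# shorter hypothesis getting four times the prior weight.
for program, weight in consistent_hypotheses([0, 1, 0, 1], max_length=4):
    print(program, weight)
```

For concreteness: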
TL;DR for paragraph below: our FAI encounters an Orb, which seems to randomly display red or green, and which our FAI really really wants to model accurately.
Say that our successful superintelligent FAI, in its preliminary probing of the local cosmos, has encountered an alien species with its own FAI. HumanFAI and AlienFAI come to an agreement to share the universe equitably. But HumanFAI finds out about a smallish spherical region of space, impervious to all probes. Every second, the Orb (apparently) randomly changes to emit either green light or red light. Many important things causally depend on which color the Orb displays; for some reason the entire alien culture hinges on it, and the aliens still causally interact with HumanFAI’s domain. That is, the utility of the universe under HumanFAI’s utility function is causally affected by the Orb. Thus, the AI cares about the Orb, in the sense that it wants to model it accurately.
However, try as it might, our poor computable AI cannot do even an epsilon better than random in predicting the Orb, because the Orb is not computable. My point is that this situation is indistinguishable from a problem the AI simply can’t solve given its current resources. If the prediction problem is merely very hard, but the AI can nevertheless gain useful information about how the aliens will behave… then either the AI is modelling the alien species plus a Truly Random variable (in the classical statistics sense) for the Orb, or the AI can in fact do better than random at predicting the Orb.
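Here’s a hedged sketch of that first branch, with every name invented for the example: the Orb gets a Truly Random fair-coin node that no computable hypothesis can improve on, while the aliens’ reaction to each color stays a perfectly ordinary computable model.

```python
# Hypothetical factorization: "alien model" x "Truly Random Orb variable".
ORB_PRIOR = {"red": 0.5, "green": 0.5}  # best computable model of the Orb

def alien_response(color):
    """Invented computable model of the aliens' reaction to each color."""
    return {"red": "mobilize", "green": "trade"}[color]

def behaviour_forecast():
    """Marginalize the alien model over the Orb's random variable."""
    forecast = {}
    for color, p in ORB_PRIOR.items():
        action = alien_response(color)
        forecast[action] = forecast.get(action, 0.0) + p
    return forecast

# The AI knows exactly what the aliens do *given* each color, even though
# its marginal forecast of the color itself can never beat 50/50.
print(behaviour_forecast())  # {'mobilize': 0.5, 'trade': 0.5}
```

The useful information lives entirely in the conditional alien model; the coin just propagates the Orb’s irreducible 50/50 through it.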
Therefore, if our AI ever encountered something Truly Random or otherwise Really Weird (that is, something that is coherent in whatever way it has to be in order to be real, but not computable), then the AI would not and could not do better than it would by just treating it as a problem too hard to solve, and modelling it as a random variable. For things that seemed random or weird but were actually computable, the AI would naturally (if we’ve done our job) become smart enough, or think long enough, to solve the problem, or at least work around it. For things that really were unpredictable by any computable hypothesis, the same thing would happen; it’s just the special case where the AI never gets around to solving it.
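To see why “too hard” and “Truly Random” look the same from the inside, here’s an illustrative simulation of a computable predictor stuck at chance. (Admittedly imperfect: os.urandom is itself computable in principle; it’s just standing in for the Orb here.)

```python
# A computable hypothesis ("predict the majority bit seen so far") run
# against os.urandom bits standing in for the Orb. Against genuine
# randomness its hit rate sits at chance; against anything with computable
# structure, this style of predictor would eventually pull ahead.
import os

bits = [b & 1 for b in os.urandom(100_000)]  # pseudo-Orb observations

ones = 0  # running count of 1s observed so far
hits = 0
for i, bit in enumerate(bits):
    guess = 1 if ones * 2 > i else 0  # majority-vote prediction
    hits += (guess == bit)
    ones += bit

print(f"hit rate: {hits / len(bits):.3f}")  # ~0.500, i.e. chance level
```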
Declaring uncomputable things irrelevant hopefully isn’t too crippling in practice; the universe looks computable, Time Turners can maybe be brute-forced, etc. Now, that doesn’t really answer the question. What do we do about uncomputable universes? Again, nothing… except if there is a chance of hypercomputation. But even if an AI is trying to somehow harness hypercomputation to do better than chance at dealing with an uncomputable facet of reality, it still has to figure out how to do the harnessing using its current computable hypotheses and the rest of its computable self.
In other words, hypercomputation isn’t a special case. It’s still a part of reality that correlates in some way with another part of reality (right? I’m, like, totally out of my depth here, but as long as we’re speculating...). The AI can notice and use this, while still only working from computable hypotheses. It should do this naturally, even operating under computable hypotheses, if it sees some way of expanding its (hyper)computational abilities.
TL;DR: whether or not the universe is computable, the AI can’t do better than computable hypotheses. The differences between reality and the best hypotheses that the AI can muster will be unavoidable, since the AI is computable. It can harness hypercomputation, but it still does so working from its computable hypotheses. Unless we program an uncomputable AI. Are you trying to ask how the AI should write an uncomputable extension of itself if it encounters hypercomputation?