Apparently, for any pair of time and space bounds (t, l), Hutter has defined an algorithm AIXI(t,l) that performs optimally assuming “the environment is computable”. Sounds like an awesome model to compare our own rationality against.
Meh. I was excited about it at first, but here’s the problem: that claim, at least as you’ve phrased it, is wrong. It would only be correct if you mean average performance over all computable environments, for an agent whose only prior knowledge is an Occamian prior.
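To pin down what “average performance over all computable environments, with an Occamian prior” means, here is the standard Bayes-mixture sketch, roughly following Hutter’s setup (details such as chronological semimeasures are glossed over):

$$\xi(x_{1:n}) \;=\; \sum_{\nu \in \mathcal{M}} 2^{-K(\nu)}\, \nu(x_{1:n}),$$

where $\mathcal{M}$ is the class of computable environments and $K(\nu)$ is roughly the length of the shortest program computing $\nu$, so $2^{-K(\nu)}$ is the Occamian prior. AIXI chooses actions to maximize expected future reward under the mixture $\xi$, not under our particular environment; that is the precise sense in which its optimality is only “on average over all computable environments”.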
But if you consider agents that operate in this computable environment, there are numerous examples that perform better than AIXI-tl. Specifically, all animals do. The reason is that, unlike AIXI/AIXI-tl, they don’t start as a tabula rasa: they are born with implicit knowledge about their environments (knowledge closer to the truth than a mere Occamian prior) and with implicit heuristics for exploiting them.
That’s why Hutter isn’t just running AIXI-tl and outfoxing them all. (pardon the pun)
More generally, there’s the important issue that we shouldn’t want an agent that performs optimally, on average, over all computable environments, if that means sacrificing performance in this environment, which it almost certainly does.
Couldn’t we patch this easily? Supply the agent with all of our prior observations. If it’s optimal, it’ll make better use of those observations than we or animals would.
Or to put it another way: wouldn’t AIXI-tl perform optimally on average over all computable environments consistent with our history? That seems desirable.
(Interesting thought: evolution can only discover a few bits every generation, and our genomes and cellular environments likely embody far fewer bits than that upper limit of a few gigabytes; how many observations would we have to supply a new AIXI-tl before it caught up to us?)
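As a rough back-of-envelope for that parenthetical, here is the arithmetic spelled out; the specific numbers (bits per generation, generation count) are placeholders of my own, not established figures:

```python
# Back-of-envelope: upper bound on information accumulated by evolution,
# assuming (hypothetically) a few bits gained per generation.
bits_per_generation = 5      # assumption: "a few bits" per generation
generations = 4e9            # assumption: order-of-magnitude count of ancestral generations
total_bits = bits_per_generation * generations

print(f"~{total_bits / 8 / 1e9:.1f} GB upper bound")  # ~2.5 GB, i.e. "a few gigabytes"
```

Whatever numbers you plug in, the point is only to bound how much of a head start evolution could have encoded before AIXI-tl would have to catch up.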
Short answer: we can patch it, but not easily.
Longer answer: If we could somehow represent the knowledge we have (both explicit and implicit) in a format that integrates nicely with the way the AIXI-approximating program stores its own knowledge, then we could “bring it up to speed” with where we are, and let it learn from there. The knowledge we give it would make it throw out the spurious solutions it would otherwise have to consider.
But representing our knowledge, especially the important implicit knowledge we never realize we’re using, is very hard, probably AI-complete. So it doesn’t reduce the problem. And any knowledge you’d feed it would have to be found by humans somehow, when finding this knowledge is the very thing you’re supposed to be automating! But yes, that would work.
Or to put it another way: wouldn’t AIXI-tl perform optimally on average over all computable environments consistent with our history?
I’m not sure, but I don’t think so. The requirement that it perform optimally on average (probability weighted) over all computable environments consistent with its history means that it has to spread some of its performance across those other environments, which can still be drastically different.
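A toy numerical sketch of that trade-off is below; the two candidate environments, their rewards, and the posterior weights are all hypothetical, chosen only to illustrate the mechanism:

```python
# Toy illustration: after conditioning on a shared history, two computable
# environments remain consistent with it, with hypothetical posterior weights.
posterior = {"env_A": 0.9, "env_B": 0.1}

# Hypothetical expected rewards of two actions in each environment.
reward = {
    "act_1": {"env_A": 10.0, "env_B": -50.0},  # great in A, terrible in B
    "act_2": {"env_A": 8.0,  "env_B": 5.0},    # slightly worse in A, safe in B
}

def mixture_value(action):
    """Expected reward of an action under the posterior mixture."""
    return sum(posterior[env] * reward[action][env] for env in posterior)

best = max(reward, key=mixture_value)
print(best, {a: round(mixture_value(a), 2) for a in reward})
# act_1 scores 0.9*10 - 0.1*50 = 4.0; act_2 scores 0.9*8 + 0.1*5 = 7.7.
# The mixture-optimal choice (act_2) gives up some reward in env_A to hedge
# against env_B: performance gets spread across the other consistent environments.
```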
One thing to keep in mind: Though we like to neatly model beliefs, values, and observations as cleanly separated, Nature is under no obligation to respect the difference—in nature, these are all blended together, such that the three can only be inferred from the overall dynamics. This is why I want to set up some kind of model of an environment that is constrained by (some analog of) the laws of thermodynamics and chemistry so that we can see the development of (phenomena we could describe as) complexity, intelligence, and life, but without explicitly programming any of those into the model.
If we could somehow represent the knowledge we have (both explicit and implicit) in a format that integrates nicely with the way the AIXI-approximating program stores its own knowledge, then we could “bring it up to speed” with where we are, and let it learn from there.
Too many restrictions there, I think. The format doesn’t have to be nice: any format it doesn’t already know, it will know after only a fixed-length penalty. We could just dump in the Internet Archive raw.
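The “fixed-length penalty” here is, as I read it, the usual constant-overhead argument from Kolmogorov complexity, stated loosely:

$$K(\text{data encoded in format } F) \;\le\; K(\text{data}) + c_F + O(1),$$

where $c_F$ is the length of a program that handles format $F$. A hypothesis that first decodes the format and then models the underlying content is therefore at most about $c_F$ bits longer than the best model of the content itself, so its prior weight drops by at most a constant factor $2^{-c_F}$, independent of how much raw material you dump in.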
The requirement that it perform optimally on average (probability weighted) over all computable environments consistent with its history means that it has to spread some of its performance across those other environments, which can still be drastically different.
But those other environments where it performs poorly are the ones where it ought to perform poorly: its best performance is reserved for the most likely future histories, just as we would aspire to.
We would perform poorly in an anti-Occamian universe just like it would, but we’re far from optimal and so would perform worse in other scenarios, I would think. I suppose we could be so biased and incorrect that we luck out and our biases and errors are just right, but is it plausible that we could luck out enough to overcome the general performance difference?
Too many restrictions there, I think. The format doesn’t have to be nice: any format it doesn’t already know, it will know after only a fixed-length penalty. We could just dump in the Internet Archive raw.
Alright, the thing I meant by “nice” needs some elaboration. I’ll put it this way: For an Ultimate AI, all of its knowledge, with no exceptions, is in terms of constraints on what it expects to observe. (And yes, this is what rationalists should strive for too.) So there is no “light is waves”, no “that’s the Doppler effect”. There are only mappings from inputs to probability distributions on future inputs. (Confusingly, this would also mean an expectation for [phenomena explainable as] humans [generating sound waves explainable by them] saying [something we would recognize as the statement] “light is waves”. Phew!)
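One way to picture “knowledge only as mappings from inputs to probability distributions on future inputs” is sketched below; the interface is purely my own illustration, not anything from Hutter’s formalism:

```python
from collections import Counter
from typing import Dict, Sequence

class PredictiveKnowledge:
    """All 'knowledge' lives in one place: a map from recent observations to a
    probability distribution over the next observation. Nothing like 'light is
    waves' is stored except insofar as it changes these predictions."""

    def __init__(self, context_length: int = 2):
        self.context_length = context_length
        self.counts: Dict[tuple, Counter] = {}

    def update(self, history: Sequence[str]) -> None:
        """Learn from an observation stream by counting what followed each context."""
        for i in range(self.context_length, len(history)):
            ctx = tuple(history[i - self.context_length:i])
            self.counts.setdefault(ctx, Counter())[history[i]] += 1

    def predict(self, recent: Sequence[str]) -> Dict[str, float]:
        """Map the recent inputs to a probability distribution over the next input."""
        ctx = tuple(recent[-self.context_length:])
        c = self.counts.get(ctx, Counter())
        total = sum(c.values())
        return {obs: n / total for obs, n in c.items()} if total else {}

# Hypothetical usage: the 'knowledge' that a red sky at dusk precedes rain is
# just a shift in predicted observations, not a stored sentence.
m = PredictiveKnowledge()
m.update(["red_sky", "dusk", "rain", "red_sky", "dusk", "rain", "blue_sky", "dusk", "sun"])
print(m.predict(["red_sky", "dusk"]))  # {'rain': 1.0}
```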
Any large human-generated knowledge base initially appears to AIXI as a long string of characters and/or some input/output black box. What in its input space do the characters refer to? What is the appropriate way to group them? Most importantly, after being told that “this stuff is true” or “this is a lot of what goes on in the environment that computes your inputs”, how does it know how it maps to the rest of the environment’s generating function? (Which I guess is ultimately the same as the first question.)
That problem is nearly as intractable as starting from just an Occamian prior. It’s only resolved by symbol-grounding, which means representing knowledge in the form of a probability distribution on observations, in a way your program can understand. Which I think brings you back into the AI-complete realm.
But those other environments where it performs poorly are the ones where it ought to perform poorly: its best performance is reserved for the most likely future histories, just as we would aspire to.
Okay, you’re right, I wasn’t keeping the comparison baselines straight. If you could give the program enough knowledge, in a form it understands or could quickly learn, to distinguish this computable environment from substantively different environments (including its location within it, the relevant history, etc.), then yes, it would make better inferences than humans do.
But the point stands that dumping any kind of knowledge base on a computable version of AIXI won’t help you a bit until you’ve done a lot more of the cognitive labor.
That’s a good point; in-born priors are much better (as priors) for typical human environments, and plenty of environments we’d like to design robots for.
But consider things like black holes, the relativity of pretty much everything kinematic except light speed, and almost all of quantum mechanics. These theories stopped seeming like outright paradoxes to me precisely when I eased up and started assuming only that the universe was probably just computational in some sense, and not necessarily “3D-physical” or “spatial” or “temporal” or whatever. So naturally, I’m pleased to find a well-developing theory of “AIXI”, an idealized implementation of this very starting point :)