Intelligence Metrics and Decision Theories
In this article I review a few previously proposed mathematical metrics of general intelligence and propose my own approach. This approach has a peculiar relation to decision theory, with different decision theories corresponding to different notions of “maximally intelligent agent”. I propose a formalization of UDT in this context.
Background
The first (to my knowledge) general intelligence metric was proposed by Legg and Hutter in “Universal Intelligence”. Their proposal was to consider an agent A interacting with an environment V, where A exerts a sequence of actions ai on V and in turn receives from V the observations oi. Their original proposal was to include in the observations a special utility channel ui, so that the utility of A is defined by

$$U = \sum_i u_i$$

(with the ui normalized so that the sum converges).
This is essentially reinforcement learning. However, general utility functions of the form $U(a_1, o_1, a_2, o_2, \ldots)$ can be accommodated by the method just as easily.
The Legg-Hutter intelligence metric is given by

$$I(A) = \mathbb{E}_V[U]$$

where the expectation value is taken with respect to a Solomonoff distribution for V.
The maximally intelligent agent w.r.t. this metric is the famous AIXI.
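To make the flavor of this definition concrete, here is a toy sketch in the same spirit (emphatically not Legg and Hutter's actual construction): the “environments” are short bitstrings read as cyclic reward tables, and the Solomonoff prior is replaced by a simple $2^{-2L}$ weight on codes of length L. All the specific rules below are invented for illustration.

```python
import itertools

# Toy sketch of a Legg-Hutter style score (not their actual construction).
# An "environment" is a bitstring code read as a cyclic reward table; the
# prior weight 2^(-2L) for a code of length L is a crude stand-in for the
# Solomonoff prior (the extra factor keeps the total weight below 1).

def run_env(code, agent, horizon=8):
    """Discounted utility of `agent` in the toy environment coded by `code`."""
    history, total = [], 0.0
    for i in range(horizon):
        a = agent(tuple(history))      # the agent sees the past rewards
        u = code[(a + i) % len(code)]  # reward looked up cyclically in the code
        total += u * 2.0 ** (-i)       # geometric discount keeps U bounded
        history.append(u)
    return total

def intelligence(agent, max_len=6):
    """Weighted sum of utility over all toy environments up to max_len bits."""
    score = 0.0
    for L in range(1, max_len + 1):
        for code in itertools.product((0, 1), repeat=L):
            score += 2.0 ** (-2 * L) * run_env(code, agent)
    return score
```

Since the weights sum to less than 1 and the discounted utility is bounded by 2, the score of any agent lands in (0, 2) for this toy environment class.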
This definition suffers from a number of problems:
Efficiency problem: There are no constraints on the function implemented by A. The computing resources allowed for evaluating it are unlimited, and it can even be uncomputable (as it is in the case of AIXI).
Duality (Anvil) problem: A is not a part of the physical universe (V) it attempts to model. A is “unbreakable”, i.e. nothing that happens in V can modify it as far as the evaluation of U is concerned.
Ontology problem1: U is defined in terms of actions and observations rather than in terms of intrinsic properties of the universe. A Legg-Hutter intelligent A with a naively constructed U would be inclined to wirehead itself.
Degeneracy problem (my own observation): This problem is superficially absent in the original (reinforcement learning) formulation, but for a general U the question arises whether this truly corresponds to the intuitive notion of intelligence. It is clear that for some “degenerate” choices of U, very simple agents would be optimal, agents which wouldn’t do any of the things we commonly ascribe to intelligence (e.g. modeling the external universe). In other words, it might make sense to ask whether a given agent is intelligent without specifying U in advance.
Verifiability problem (my own observation): A descriptive model of intelligence should work for real-world intelligent agents (humans). In the real world, intelligence developed by natural selection and therefore must be a verifiable property: that is, it must be possible to measure the intelligence of an agent using a limited amount of computing resources (since Nature only possesses a limited amount of computing resources, as far as we know). Now, the Solomonoff distribution involves randomly generating a program R for computing the transition probabilities of V. Since there are no limitations on the computing resources consumed by R, simulating V can be very costly. On average it will be insanely costly, thanks to the busy beaver function. This inhibits any feasible way of computing I(A).
Side note: because of this, it might be that the “correct” definition of I(A) is efficiently computable, but it’s not feasible to compute an A with a human-like level of I(A) (that is, it’s not possible to compute it using a short program and scarce computing resources). In this case the success of the natural evolution of intelligence would have to be blamed on the Anthropic Principle. It would also mean we cannot construct an AI without using the human brain as a “template” in some sense. Moreover, there would be a physical bound on constructing intelligent agents, which might rule out the “AI foom” scenario.
Orseau and Ring2 proposed solutions for the efficiency and duality problem.
Solving the efficiency problem is straightforward. Consider a computing machine M (e.g. a universal Turing machine) which produces an action a every fixed cycle (for example, a is read off a register in M) and is able to receive an observation o every fixed cycle (for example, o is written to a register in M). We can then evaluate I(A(Q)), where A(Q) is the abstract (Legg-Hutter) agent computed by M primed with the program Q. We can then ask for Q*(n), the maximally intelligent program of length at most n. In general, Q*(n) seems to be uncomputable.
The duality problem is more tricky. Orseau and Ring suggest considering a natural number Q which primes the stochastic process $\{\upsilon_i\}$, i.e. a process with initial condition $\upsilon_1 = Q$ and transition probabilities $\rho(\upsilon_{i+1} \mid \upsilon_1, \ldots, \upsilon_i)$.

Given a utility function $U(\{\upsilon_i\})$ we can define the intelligence of Q to be

$$I(Q) = \mathbb{E}[U \mid \upsilon_1 = Q]$$
For example, we can imagine Q to be the state of a computer controlling a robot, or the state of a set of cells within a cellular automaton. This removes the unphysical separation of A and V. However, to me this framework seems too general if we don’t impose any constraints on ρ.
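As a sketch of how such an embedded evaluation might look, here is a toy Monte Carlo estimate of $I(Q) = \mathbb{E}[U \mid \upsilon_1 = Q]$, where Q is simply the initial state of a noisy counter; the dynamics ρ, the noise rate and the utility function are all invented for illustration. Note that the noise can overwrite the “agent”, so nothing here is unbreakable:

```python
import random

random.seed(1)

# Toy sketch of the Orseau-Ring setting: the "agent" Q is nothing but part
# of the initial state of the universe. The dynamics rho, the noise rate and
# the utility function below are invented for illustration.

def step(v):
    if random.random() < 0.05:        # noise: the universe can overwrite
        return random.randint(0, 15)  # the agent; nothing is "unbreakable"
    return min(v + 1, 15)             # otherwise the counter climbs

def utility(traj):
    return sum(traj) / len(traj)      # U rewards high counter values

def intelligence(Q, runs=2000, horizon=20):
    """Monte Carlo estimate of I(Q) = E[U({v_i}) | v_1 = Q]."""
    total = 0.0
    for _ in range(runs):
        v, traj = Q, []
        for _ in range(horizon):
            traj.append(v)
            v = step(v)
        total += utility(traj)
    return total / runs
```

In this toy dynamics a higher initial state scores better, so the “intelligence” of Q is just how well-positioned the embedded initial state is under ρ and U.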
Intelligence in a Quasi-Solomonoff Universe
Consider a universe Y described by a sequence of states {υi}. In order for the Solomonoff distribution to be applicable, we need the state υi to be represented as a natural number. For example Y can consist of a computing machine M interacting with an environment V through actions a and observations o. Or, Y can be a cellular automaton with all but a finite number of cells set to a stationary “vacuum” state.
Consider further a tentative model D of the dynamics of Y. There are several types of model which might be worth considering:
A constraint model only distinguishes between admissible and inadmissible state transitions, without assigning any probabilities when multiple state transitions are possible. This allows considering the minimalistic setting in which Y is a computing machine M with input register o, and D-admissible transitions respect the dynamics of M while allowing the value of o to change in an arbitrary way.
A complete model prescribes the conditional probabilities $\Pr_D(\upsilon_{i+1} \mid \upsilon_1, \ldots, \upsilon_i)$ for all i. This allows elaborate models with a rich set of degrees of freedom. In particular, it will serve to solve the ontology problem, since U will be allowed to depend on any degrees of freedom we include in the model.
A dynamics model prescribes conditional probabilities of the form $\rho_D(\upsilon_{i+1} \mid \upsilon_{i-k+1}, \ldots, \upsilon_i)$. Here k is fixed and $\rho_D$ is a function of k+1 variables. In particular, D doesn’t define the distribution of $\upsilon_i$ for $i \le k$. That is, D describes time-translation-invariant dynamics with unspecified initial conditions. This also allows for elaborate models.
The quasi-Solomonoff probability distribution for $\{\upsilon_i\}$ associated with model D and learning time t is defined as follows:
For a constraint model, it is given by the conditional probabilities of a Solomonoff distribution given that all transitions are D-admissible for $i < t$.
For a complete model, we consider a stochastic process the transition probabilities of which are governed by D for $i < t$ and by the Solomonoff distribution for $i \ge t$.
For a dynamics model, the distribution is constructed as follows. Consider a joint distribution for two universes $\{\upsilon_i\}$ and $\{\tilde\upsilon_i\}$, where $\{\upsilon_i\}$ is distributed according to Solomonoff and $\{\tilde\upsilon_i\}$ is distributed according to D with $\upsilon_1, \ldots, \upsilon_k$ serving as initial conditions. Now take the conditional distribution given that $\upsilon_i = \tilde\upsilon_i$ for $i < t$.
Here t is a parameter representing the amount of evidence in favor of D.
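For a constraint model, the construction can be sketched by rejection sampling: draw a “program” from a crude length-penalizing prior, unroll a trajectory, and keep it only if the transitions before time t are D-admissible. The geometric-length codes and the particular constraint (“the state never decreases”) below are invented for illustration:

```python
import random

random.seed(2)

# Toy sketch of the quasi-Solomonoff distribution for a constraint model,
# via rejection sampling. The geometric-length "programs" and the constraint
# D below are invented for illustration.

def sample_program():
    """Crude stand-in for the Solomonoff prior: geometric code length."""
    code = [random.randint(0, 3)]
    while random.random() < 0.5:
        code.append(random.randint(0, 3))
    return code

def transition(code, v):
    return code[v % len(code)]        # the code is read as a transition table

def admissible(v, w):
    return w >= v                     # the model D: no decreasing transitions

def quasi_solomonoff_sample(t=3, horizon=6):
    """Draw a trajectory, conditioning on D-admissibility for i < t."""
    while True:
        code = sample_program()
        traj = [0]
        for _ in range(horizon - 1):
            traj.append(transition(code, traj[-1]))
        if all(admissible(traj[i], traj[i + 1]) for i in range(t - 1)):
            return traj
```

The accepted trajectories are exactly the Solomonoff-style samples compatible with D up to time t; after t, the dynamics may violate the constraint freely.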
The agent A is now defined by a property q(υt) of υt, for example some part of the state of M. Given a utility function $U(\{\upsilon_i\})$ we can consider

$$I_{EDT}(Q) = \mathbb{E}[U \mid q(\upsilon_t) = Q]$$

where the expectation value is taken with respect to the quasi-Solomonoff distribution. This can be interpreted as an intelligence metric.
We now take the meta-level point of view where Q is regarded as a decision, which allows us to make the link to decision theory. For example, we can think of an AI programmer making the decision of what code to enter into her robot, or natural evolution making a “decision” regarding the DNA of H. sapiens. According to this point of view, IEDT(Q) is the quality of the decision Q according to Evidential Decision Theory. It is possible to define the Causal Decision Theory analogue ICDT(Q) by imagining a “divine intervention” which changes q(υt) into Q regardless of the previous history. We have

$$I_{CDT}(Q) = \mathbb{E}_{div(Q)}[U]$$

where $\mathbb{E}_{div(Q)}$ is the expectation value with respect to the unconditional quasi-Solomonoff distribution modified by the “divine intervention” above.
These intelligence metrics are prone to the standard criticisms of the respective decision theories. IEDT(Q) factors in the indirect effect Q has on U by giving more weight to universes in which Q is likely. ICDT(Q) favors agents that fail on certain Newcomb-like problems. Specifically, suppose A discovers that the universe contains another agent Ω which created two boxes B1 and B2 and placed utility in each before t. Then a CDT-intelligent A would prefer taking B1 + B2 over taking B1 only, even if it knows that Ω predicted A’s decision and adjusted the utility in the boxes in such a way that taking B1 only would be preferable.
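The payoff arithmetic of this Newcomb scenario can be spelled out in a few lines (the utility numbers are assumed for illustration):

```python
# Toy Newcomb problem. Payoff numbers are assumed for illustration:
# box B2 always contains 1 utilon; box B1 contains 1000 utilons iff
# Omega predicted that the agent takes only B1 ("one").
def payoff(choice, predicted):
    b1 = 1000 if predicted == "one" else 0
    b2 = 1
    return b1 if choice == "one" else b1 + b2

# CDT reasoning: holding the prediction fixed, two-boxing dominates.
cdt_dominates = all(payoff("two", p) > payoff("one", p) for p in ("one", "two"))

# But against an accurate predictor (prediction matches the actual choice),
# the one-boxer walks away with more utility.
one_boxer_wins = payoff("one", "one") > payoff("two", "two")
```

Both flags come out true, which is precisely the tension: the dominance argument and the accurate-predictor argument point in opposite directions.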
I suggest solving the problem by applying a certain formalization of UDT.
The Solomonoff distribution works by randomly producing a program R which computes3 the transition probabilities of the universe. When we apply the Solomonoff distribution to Y, we can imagine the event space as consisting of pairs (R, {ηi,υ}). Here the $\eta_{i,\upsilon} \in [0,1]$ are uniformly distributed numbers used to determine the occurrence of random events. That is, (R, {ηi,υ}) determines {υi} by the recursive rule that $\upsilon_{i+1}$ is the state selected by feeding $\eta_{i,\upsilon_i}$ into the inverse cumulative distribution function of $P_R(\cdot \mid \upsilon_1, \ldots, \upsilon_i)$. Here $P_R$ stands for the conditional probability computed by R.
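One concrete way to realize such a rule (with the η indexed by time step only, for simplicity, and an invented stand-in for $P_R$) is inverse-CDF sampling:

```python
import random

random.seed(3)

STATES = (0, 1, 2)

def P_R(w, history):
    """Stand-in for the conditional probability computed by R (invented):
    the next state repeats the last one with probability 0.6."""
    return 0.6 if w == history[-1] else 0.2

def next_state(history, eta):
    """The recursive rule: eta in [0,1) selects v_{i+1} by inverting the
    cumulative distribution of P_R(. | history)."""
    acc = 0.0
    for w in STATES:
        acc += P_R(w, history)
        if eta < acc:
            return w
    return STATES[-1]                 # guard against rounding at eta ~ 1

def sample_universe(n=10):
    """(R, {eta_i}) -> {v_i}: fix the random numbers up front, then unroll."""
    etas = [random.random() for _ in range(n)]
    traj = [0]
    for i in range(n):
        traj.append(next_state(traj, etas[i]))
    return traj
```

The point of fixing the η up front is that the whole history becomes a deterministic function of (R, {η}); this is what lets the predictors below take the universe’s randomness as an explicit input.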
We define $U_T$ as the program receiving (R, {ηi,υ}) as input (the input is infinite, but this doesn’t pose a problem) and producing a binary fraction as output, s.t.
$U_T$ halts in time at most T on any input
$\mathbb{E}[(U_T(R, \{\eta_{i,\upsilon}\}) - U)^2]$ is minimal among all such programs. Here the expectation value is taken w.r.t. the quasi-Solomonoff distribution.
Further, we define $U^*_T$ as the program receiving (R, {ηi,υ}, Q) as input and producing a binary fraction as output, s.t.
$U^*_T$ halts in time at most T on any input
$\mathbb{E}[(U^*_T(R, \{\eta_{i,\upsilon}\}, q(\upsilon_t)) - U)^2]$ is minimal among all such programs.
Thus $U_T$ and $U^*_T$ are least-squares optimal predictors for U under the constraint of running time T, where $U^*_T$ is provided the additional information q(υt).
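The least-squares characterization can be illustrated in a toy universe, ignoring the running-time bound T: since the optimal least-squares predictor is a conditional expectation, giving the predictor the extra input q(υt) shows up as a drop in mean squared error. The two hidden bits and the utility U = x AND y below are invented for illustration:

```python
import random
from collections import defaultdict

random.seed(4)

# Toy illustration of the predictors U_T and U*_T, ignoring the time bound T.
# The universe is two hidden bits (x, y) with U = x AND y; the property
# q(v_t) made available to U*_T is x. The optimal least-squares predictor is
# a conditional expectation, so the value of the extra information shows up
# as a drop in mean squared error.

samples = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(20000)]
utils = [x & y for x, y in samples]

# Best predictor without access to q: the constant E[U].
mean_u = sum(utils) / len(utils)
mse_without = sum((mean_u - u) ** 2 for u in utils) / len(utils)

# Best predictor with access to q: the empirical E[U | x].
by_x = defaultdict(list)
for (x, _), u in zip(samples, utils):
    by_x[x].append(u)
cond_mean = {x: sum(us) / len(us) for x, us in by_x.items()}
mse_with = sum((cond_mean[x] - u) ** 2
               for (x, _), u in zip(samples, utils)) / len(samples)
```

Here the MSE drops from about 0.19 (the variance of U) to about 0.125; that gap is the toy analogue of the gain u(Q, T) from knowing q(υt).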
Consider

$$u(Q, T) = \mathbb{E}\left[U^*_T(R, \{\eta_{i,\upsilon}\}, Q) - U_T(R, \{\eta_{i,\upsilon}\}) \,\middle|\, q(\upsilon_t) = Q\right]$$

u(Q, T) is the gain in estimated utility obtained by adding the information q(υt) = Q, provided that the underlying physics (R, {ηi,υ}) of Y (which allows predicting q(υt) = Q) is already known. For T ≫ 0, u(Q, T) approaches 0 (since the additional piece of information that q(υt) = Q can be easily deduced within the given computing resources). For T approaching 0, u(Q, T) approaches 0 as well, since there’s not enough time to process the actual input anyway. Hence it is interesting to consider the quantity

$$u^*(Q) = \max_T u(Q, T)$$
It is tempting to interpret u* as an intelligence metric. However, it doesn’t seem sensible to compare u*(Q) for different values of Q. Denote by T* the value of T for which the maximum above is achieved (u(Q, T*) = u*(Q)). Instead of defining an intelligence metric per se, we can define Q to be a maximally UDT-intelligent agent if for every Q' we have

$$\mathbb{E}\left[U^*_{T^*}(R, \{\eta_{i,\upsilon}\}, Q) \,\middle|\, q(\upsilon_t) = Q\right] \ge \mathbb{E}\left[U^*_{T^*}(R, \{\eta_{i,\upsilon}\}, Q') \,\middle|\, q(\upsilon_t) = Q\right]$$
This represents the (non-vacuous) self-referential principle that Q is the best agent given the information that Y is the sort of universe that gives rise to Q. The role of T* is to single out the extent of logical uncertainty for which Q cannot be predicted but given Q the consequences for U can be predicted.
Directions for Further Research
It is important to obtain existence results for maximally UDT-intelligent agents for the concept to make sense.
We need a way to consider non-maximally UDT-intelligent agents. For example, we can consider the parameter T* as an intelligence metric, but I’m not sure this is the correct choice.
My proposal doesn’t solve the degeneracy and verifiability problems. The latter can be tackled by replacing the Solomonoff distribution with something that allows efficient induction. For example, we can use the distribution with the minimal Fisher distance to the Solomonoff distribution under some computing-resource constraints. However, I’m not convinced this would be conceptually correct.
It is possible to consider multi-agent systems. In this setting, the problem transforms from a sort of optimization problem into a game-theory problem. It should then be possible to study Nash equilibria etc.
It is tempting to look for a way to close the loop between the basic point of view “Q is a program” and the meta point of view “Q is a decision”. This might provide an interesting attack angle on self-improving AI. Of course, the AI in this framework is already self-improving if M is capable of reprogramming itself.
Footnotes
1The name “ontology problem” is courtesy pengvado
2I was exposed to this work by reading Anja
3It computes them in the weak sense of producing a convergent sequence of lower bounds
Some other discussions on formal measures of intelligence, also building on Legg & Hutter’s work:
Hernandez-Orallo & Dowe (2010) Measuring universal intelligence: Towards an anytime intelligence test
Hernandez-Orallo et al. (2011) On more realistic environment distributions for defining, evaluating and developing intelligence
Hibbard (2009) Bias and no free lunch in formal measures of intelligence
Hibbard (2011) Measuring agent intelligence via hierarchies of environments
Goertzel (2010) Toward a Formal Characterization of Real-World General Intelligence
Legg & Veness (2011) An Approximation of the Universal Intelligence Measure
Schaul et al. (2011) Measuring intelligence through games
One issue, which a lot of papers bring up, is that the way we define the distribution of environments in which to test the agent has a major impact on its measured intelligence. While sufficiently intelligent agents will perform well in any environment or distribution of environments, that’s probably not of much help, since even humans don’t seem to be at that level of intelligence: we’ve evolved to deal with the kinds of environments that have historically existed on our planet, not with arbitrary ones. Legg & Veness discuss this issue:
Which is why a lot of the above papers focus on trying to define useful environmental distributions.
Thanks for the references! I was familiar with Hernandez-Orallo et al. (2011) and Goertzel (2010).
However, it seems that none of these papers tackles the Duality problem.
Regarding environmental distributions, I think this problem is solved rather elegantly in my approach by the quasi-Solomonoff distribution, which singles out environments compatible with the tentative model D. Essentially it is the Solomonoff prior updated by a period of observations during which D-behavior was seen.
Regarding the choice of a reference machine, its role asymptotically vanishes in the tail of the Solomonoff distribution. The quasi-Solomonoff distribution samples the tail of the Solomonoff distribution by design, the more so the more complex is D.
In applications it seems to be a good idea to use a D as complex as possible (i.e. to put in as much information about the universe as possible) while using a reference machine as simple as possible. In fact, I would use lambda calculus rather than a Turing machine. This is because the simpler the reference machine, the closer the relation between the Solomonoff distribution and Occam’s razor. If we assume that our intuitive grasp of simplicity is approximately correct, then using a complex reference machine doesn’t make sense.