Describing X as “Y, together with the difference between X and Y” is a tautology. Drawing the conclusion that X is “really” a sort of Y already, and the difference is “just” a matter of engineering development is no more than inspirational fluff. Dividing problems into subproblems is all very well, but not when one of the subproblems amounts to the whole problem.
The particular instance “here’s a completely crappy attempt at making an AGI and all we have to do is scale it up” has been a repeated theme of AGI research from the beginning. The scaling up has never happened. There is no such thing as a “completely crappy AGI”, only things that aren’t AGI.
I think you underestimate the significance of reducing the AGI problem to the sequence prediction problem. Unlike the former, the latter problem is very well defined, and progress on it is easily measurable and quantifiable (in terms of efficiency of cross-domain compression). The likelihood of engineering progress on a problem where success can be quantified seems significantly higher than on something as open-ended as “general intelligence”.
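To make the “measurable and quantifiable” claim concrete: prediction quality can be scored as the code length (in bits) a predictor assigns to a sequence, which is exactly its compression performance. A minimal sketch, with two toy predictors of my own invention (the names `uniform` and `laplace` are illustrative, not from any particular library):

```python
import math

def cumulative_log_loss(predictor, sequence):
    """Total -log2 P(next symbol | history): the code length in bits
    the predictor assigns to the sequence. Lower = better compression."""
    bits = 0.0
    history = []
    for symbol in sequence:
        p = predictor(history, symbol)  # P(symbol | history)
        bits += -math.log2(p)
        history.append(symbol)
    return bits

def uniform(history, symbol):
    # Ignores the data entirely: always a fair coin.
    return 0.5

def laplace(history, symbol):
    # Laplace's rule of succession: (count + 1) / (n + 2).
    n = len(history)
    return (history.count(symbol) + 1) / (n + 2)

seq = [1, 1, 1, 1, 0, 1, 1, 1]
print(cumulative_log_loss(uniform, seq))  # 8.0 bits exactly
print(cumulative_log_loss(laplace, seq))  # fewer bits: it exploits the bias
```

The predictor that compresses better is, in this quantifiable sense, the better one — and that comparison works identically across domains.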
It doesn’t “reduce” anything, not in the reductionist sense anyway. If you take that formula and apply the yet-unspecified, ultra-powerful mathematics package to it (that’s what you’d need just to run it on a planet’s worth of computers), it’s this mathematics package that has to be extremely intelligent and ridiculously superhuman before the resulting AI is even a chimp. It’s this mathematics package that has to learn tricks and read books, that has to be able to do something as simple as making use of a theorem it encountered on input.
The mathematics package doesn’t have to do anything “clever” to build a highly clever sequence predictor. It just has to be efficient in terms of computing time and training data necessary to learn correct hypotheses.
So nshepperd is quite correct: MC-AIXI is a ridiculously inefficient sequence predictor and action selector, with major visible flaws, but reducing “general intelligence” to “maximizing a utility function over world-states via sequence prediction in an active environment” is a Big Deal.
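For reference, the reduction being argued over here can be written down exactly. Roughly in Hutter’s notation, AIXI’s action choice is an expectimax over future action/observation/reward sequences, with environments weighted by the simplicity of programs consistent with the history:

```latex
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
  \bigl[\, r_t + \cdots + r_m \,\bigr]
  \sum_{q \,:\, U(q,\, a_1 \dots a_m) \,=\, o_1 r_1 \dots o_m r_m} 2^{-\ell(q)}
```

Here $U$ is a universal machine, $q$ ranges over environment programs reproducing the interaction history, and $\ell(q)$ is program length. Whether writing this down counts as a “reduction” is exactly the disagreement in this thread.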
A multitude of AIs have followed what you think the “AIXI” model is — select predictors that work, use them — since long before anyone bothered to formulate it as a brute-force loop (AIXI).
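The “select predictors that work, use them” recipe in question is essentially a Bayesian mixture over a pool of candidate predictors: weight each by prior times likelihood of the data so far, then average their forecasts. A toy sketch (the pool of two hand-written predictors is an assumption for illustration):

```python
def mixture_predict(predictors, priors, history):
    """Weight each predictor by prior * likelihood of the history,
    then return the mixture probability that the next symbol is 1."""
    weights = []
    for predict, prior in zip(predictors, priors):
        likelihood = 1.0
        for i, sym in enumerate(history):
            likelihood *= predict(history[:i], sym)
        weights.append(prior * likelihood)
    total = sum(weights)
    weights = [w / total for w in weights]
    return sum(w * predict(history, 1)
               for w, predict in zip(weights, predictors))

def biased(history, symbol):   # believes 1s are common
    return 0.9 if symbol == 1 else 0.1

def fair(history, symbol):     # believes in a fair coin
    return 0.5

history = [1] * 10
p_next = mixture_predict([biased, fair], [0.5, 0.5], history)
# Close to 0.9: the biased predictor has earned nearly all the weight.
print(p_next)
```

AIXI’s universal mixture is this same loop taken over *all* computable predictors, weighted by simplicity — which is the point of contention: the loop itself is the easy part.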
I think you, like most people over here, have a completely inverted view of the relative difficulty of these breakthroughs. There is a point where the AI uses hierarchical models to deal with an environment of greater complexity than the AI itself; getting there is fundamentally difficult — as in, we have no clue how to get there.
It is nice to believe that the world is waiting on you for some conceptual breakthrough just roughly within your reach, like AIXI, but that’s just not how it works.
edit: Basically, it’s as if you were worried about nuclear-powered, 20-foot-tall robots that shoot nuclear hand grenades. After all, the concept of a 20-foot-tall robot is the enormous breakthrough, while a sufficiently small nuclear reactor or hand-grenade-sized nukes are just a matter of “efficiency”.
That’s not what’s interesting about AIXI. “Select predictors that work, then use them” is a fair description of the entire field of machine learning; we’ve learned how to do that fairly well in narrow, well-defined problem domains, but hypothesis generation over poorly structured, arbitrarily complex environments is vastly harder.
The AIXI model is cool because it defines a clever (if totally impractical, and not without pitfalls) way of specifying a single algorithm that can generalize to arbitrary environments without requiring any pipe-fitting work on the part of its developers. That is (to my knowledge) new, and fairly impressive, though it remains a purely theoretical advance: the Monte Carlo approximation eli mentioned may qualify as general AI in some technical sense, but for practical purposes it’s about as smart as throwing transistors at a dart board.
but hypothesis generation over poorly structured, arbitrarily complex environments is vastly harder.
Hypothesis generation over environments that aren’t massively less complex than the machine is vastly harder, and remains vastly harder (though there are advances). There’s a subtle problem substitution occurring here, which steals the thunder you originally reserved for something that actually is vastly harder.
The thing is, many people could at any time have written a loop over, say, possible neural network weights — and NNs (with feedback) being Turing-complete, it would work roughly the same. Said for loop would be massively, massively less complicated, ingenious, and creative than what those people actually did with their time instead.
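The loop being described is easy to exhibit. A minimal sketch — a brute-force search over a coarse grid of weights for a one-neuron “network”, here fitting AND (the grid, the task, and the names are all illustrative assumptions):

```python
from itertools import product

def net(w1, w2, b, x1, x2):
    """A one-neuron 'network' with a step activation."""
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Truth table for AND: the only structure the loop ever sees.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Exhaustive loop over a discretized parameter space.
grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
best = min(
    product(grid, repeat=3),
    key=lambda p: sum(net(*p, x1, x2) != y for (x1, x2), y in data),
)
errors = sum(net(*best, x1, x2) != y for (x1, x2), y in data)
print(best, errors)  # a perfect fit exists in this grid, so 0 errors
```

The loop is trivial; everything hard about real learning research is in making the search tractable when the parameter space is astronomically larger than this grid — which is the commenter’s point.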
The ridiculousness here is that, say, John worked on those ingenious algorithms while keeping in mind that the ideal is the best parameters out of the whole space (which is the abstract concept behind the for loop iterating over those parameters). You couldn’t see what John was doing, because he didn’t write it out as a for loop. So James does some work where he — unlike John — has to write out the for loop explicitly, and you go “Whoa!”
That is (to my knowledge) new
Isn’t. See Solomonoff induction, works of Kolmogorov, etc.
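For context on the pointer: Solomonoff’s universal prior (1960s) already formalizes “enumerate all programs, weight by simplicity” as a single prediction rule:

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
```

where $U$ is a universal prefix machine, the sum ranges over programs $p$ whose output begins with $x$, and $\ell(p)$ is the program’s length. AIXI extends this prior from passive prediction to action selection, which is why its novelty is disputed here.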
How so?
What a wonderful quote!