A multitude of AIs have been following what you think the “AIXI” model is (select predictors that work, use them) long before anyone bothered to formulate it as a brute-force loop (AIXI).
I think you, like most people over here, have a completely inverted view with regard to the difficulty of different breakthroughs. There is a point where the AI uses hierarchical models to deal with an environment of greater complexity than the AI itself; getting there is fundamentally difficult, as in, we have no clue how to get there.
It is nice to believe that a world of hoi polloi is waiting on you for some conceptual breakthrough just roughly within your reach, like AIXI, but that’s just not how it works.
edit: Basically, it’s as if you were concerned about nuclear-powered, 20-foot-tall robots that shoot nuclear hand grenades. After all, the concept of a 20-foot-tall robot is the enormous breakthrough, while a sufficiently small nuclear reactor or hand-grenade-sized nukes are just a matter of “efficiency”.
That’s not what’s interesting about AIXI. “Select predictors that work, then use them” is a fair description of the entire field of machine learning; we’ve learned how to do that fairly well in narrow, well-defined problem domains, but hypothesis generation over poorly structured, arbitrarily complex environments is vastly harder.
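To make “select predictors that work, then use them” concrete, here is a minimal sketch in Python; the toy data, the candidate predictors, and the held-out split are all invented for illustration, not anything from AIXI:

    import random

    random.seed(0)

    # Toy data: y = 3x + noise (made up for the example).
    points = [(x, 3 * x + random.gauss(0, 0.1))
              for x in (random.uniform(-1, 1) for _ in range(200))]
    train, held_out = points[:150], points[150:]

    def fit_slope(data):
        # Least-squares slope through the origin, fitted on the training split.
        return sum(x * y for x, y in data) / sum(x * x for x, _ in data)

    slope = fit_slope(train)
    candidates = [lambda x: 0.0,        # predicts nothing
                  lambda x: x,          # untrained guess
                  lambda x: slope * x]  # fitted predictor

    def mse(predict, data):
        return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

    # “Select predictors that work”: score each candidate on held-out data ...
    best = min(candidates, key=lambda p: mse(p, held_out))
    # ... “then use them”: predict with the winner.
    print(mse(best, held_out), best(0.5))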
The AIXI model is cool because it defines a clever (if totally impractical, and not without pitfalls) way of specifying a single algorithm that can generalize to arbitrary environments without requiring any pipe-fitting work on the part of its developers. That is (to my knowledge) new, and fairly impressive, though it remains a purely theoretical advance: the Monte Carlo approximation eli mentioned may qualify as general AI in some technical sense, but for practical purposes it’s about as smart as throwing transistors at a dart board.
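For reference, the brute-force loop in question is Hutter’s expectimax definition of AIXI (my transcription of the standard formula; U is a universal Turing machine, q ranges over environment programs, ℓ(q) is the length of q, and m is the horizon):

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left[ r_k + \cdots + r_m \right] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$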
but hypothesis generation over poorly structured, arbitrarily complex environments is vastly harder.
Hypothesis generation over environments that aren’t massively less complex than the machine itself is vastly harder, and remains vastly harder (though there have been advances). There’s a subtle problem substitution occurring here, one which steals the thunder you originally reserved for something that actually is vastly harder.
Thing is, many people could, at any time, have written a loop over, say, possible neural network weights; NNs (with feedback) being Turing-complete, it’d work roughly the same. Said for loop would be massively, massively less complicated, ingenious, and creative than what those people actually did with their time instead.
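A hedged sketch of what such a loop looks like: exhaustively enumerating coarse weight settings for a tiny 2-2-1 step-activation network until one fits XOR. The network shape, the grid, and all the names are choices made for this example only:

    import itertools

    def tiny_net(w, x1, x2):
        # 2-2-1 network with step activations; w is a 9-tuple of weights and biases.
        h1 = 1.0 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0.0
        h2 = 1.0 if w[3] * x1 + w[4] * x2 + w[5] > 0 else 0.0
        return 1.0 if w[6] * h1 + w[7] * h2 + w[8] > 0 else 0.0

    XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    GRID = (-1.0, -0.5, 0.5, 1.0)  # coarse grid for each of the 9 parameters

    # The massively-less-complicated part: a bare loop over parameter space.
    best = min(itertools.product(GRID, repeat=9),
               key=lambda w: sum((tiny_net(w, *x) - y) ** 2 for x, y in XOR))
    print(best, [tiny_net(best, *x) for x, _ in XOR])

Four values per weight and nine weights is already 4^9 ≈ 262,000 settings; the loop is trivial to write and exponential to run, which is exactly the point.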
The ridiculousness here is that, say, John worked on those ingenious algorithms while keeping in mind that the ideal is the best parameters out of the whole space (which is the abstract concept behind the for loop iterating over those parameters). You couldn’t see what John was doing because he didn’t write it out as a for loop. So James does some work where he, unlike John, has to write the for loop out explicitly, and you go “Whoah!”
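Formally (my gloss on the parable), both John and James are after

$$\theta^\star = \arg\max_{\theta \in \Theta} \mathrm{performance}(\theta),$$

John via clever search that never materializes the loop over $\Theta$, James by writing that loop out literally.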
That is (to my knowledge) new
Isn’t. See Solomonoff induction, works of Kolmogorov, etc.
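For the record, the 2^{-length} weighting over programs that does AIXI’s hypothesis selection is Solomonoff’s universal prior (1964); in the usual notation, with U a universal machine and p ranging over programs whose output begins with x:

$$M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}$$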
for practical purposes it’s about as smart as throwing transistors at a dart board.

What a wonderful quote!