Incidentally, it is also mathematically nonsensical to define a “utility function” without a well-defined domain.
Which is why reinforcement learning is so popular, yes: it lets you induce a utility function over any environment you’re capable of learning to navigate.
Remember, any machine-learning algorithm has a defined domain of hypotheses it can learn/search within. Given that domain of hypotheses, you can define a domain of utility functions. Hence, reinforcement learning and preference learning.
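To make that concrete, here is a minimal tabular Q-learning sketch (the corridor environment, the reward, and all constants are hypothetical stand-ins, not anyone's actual system): the learned value function is only ever defined over the state/action domain the learner can represent, which is exactly the sense in which the domain of hypotheses fixes the domain of utility functions.

```python
import random
from collections import defaultdict

# Toy 1-D corridor: states 0..4, reward for reaching the last state.
# Everything here is a made-up stand-in, chosen only to show that the induced
# value function Q lives on the learner's own state/action domain.
ACTIONS = ["left", "right"]
N_STATES = 5
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.3

def step(state, action):
    nxt = max(0, state - 1) if action == "left" else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

Q = defaultdict(float)  # Q[(state, action)] -- the induced "utility" over this domain

for _ in range(500):  # training episodes
    s, done = 0, False
    while not done:
        if random.random() < EPS:
            a = random.choice(ACTIONS)                      # explore
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])       # exploit
        s2, r, done = step(s, a)
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
        s = s2

print(max(ACTIONS, key=lambda a: Q[(0, a)]))  # greedy choice at the start state
```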
The notion that AI is possible is mainstream. The crank stuff, such as “I can download an inefficient but functional subhuman AGI from Github; making it superhuman is just a matter of adding an entire planet’s worth of computing power”, is to computer science what hydrinos are to physics.
You are completely missing the point. If we’re all going to agree that AI is possible, and agree that there’s a completely crappy but genuinely existent example of AGI right now, then it follows that getting AI up to dangerous and/or beneficial levels is a matter of additional engineering progress. My whole point is that we’ve already crossed the equivalent threshold from “Hey, why do photons do that when I fire them at that plate?” to “Oh, there’s a photoelectric effect that looks to be described well by this fancy new theory.” From there it was less than one century between the raw discovery of quantum mechanics and the common usage of everyday technologies based on quantum mechanics.
So you’ve got your academic curiosity that’s doing it all on its own, using some very general and impractical representations for modelling the world. So what?
The point being: when we can manage to make it sufficiently efficient, and provided we can make it safe, we can set it to work solving just about any problem we consider to be, well, a problem. Given sufficient power and efficiency, it becomes useful for doing stuff people want done, especially stuff people either don’t want to do themselves or have a very hard time doing themselves.
completely crappy but genuinely existent example of AGI, then it follows that getting AI up to dangerous and/or beneficial levels is a matter of additional engineering progress.
Yeah. I can formally write down the resurrection of everyone who ever died, using pretty much the exact same approach: a for loop, iterating over every possible ‘brain’, just like the loops that iterate over every action sequence. Because when you have no clue how to do something, you can always write a for loop. I can put it on github, then cranks can download it and say that resurrecting all the dead is a matter of additional engineering progress. After all, all the dead once lived, so it’s got to be possible for them to be alive.
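For what it’s worth, the sort of for loop being gestured at really is trivial to write down and hopeless to run; a minimal sketch (the predicate and the bit-string encoding are placeholders, not anyone’s actual proposal):

```python
from itertools import product

def brute_force_search(satisfies, max_bits=20):
    """Enumerate every bitstring up to max_bits and return the first one the
    (placeholder) predicate accepts. Swap in 'encodes a brain' or 'encodes a
    winning action sequence' for the predicate and the loop stays the same;
    only the running time becomes astronomical."""
    for n in range(1, max_bits + 1):
        for bits in product((0, 1), repeat=n):  # 2**n candidates of length n
            if satisfies(bits):
                return bits
    return None

# Toy usage: find a length-5 bitstring with exactly three ones.
print(brute_force_search(lambda b: len(b) == 5 and sum(b) == 3))
```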
Describing X as “Y, together with the difference between X and Y” is a tautology. Drawing the conclusion that X is “really” a sort of Y already, and the difference is “just” a matter of engineering development is no more than inspirational fluff. Dividing problems into subproblems is all very well, but not when one of the subproblems amounts to the whole problem.
The particular instance “here’s a completely crappy attempt at making an AGI and all we have to do is scale it up” has been a repeated theme of AGI research from the beginning. The scaling up has never happened. There is no such thing as a “completely crappy AGI”, only things that aren’t AGI.
I think you underestimate the significance of reducing the AGI problem to the sequence prediction problem. Unlike the former, the latter problem is very well defined, and progress is easily measurable and quantifiable (in terms of efficiency of cross-domain compression). The likelihood of engineering progress on a problem where success can be quantified seems significantly higher than on something as open-ended as “general intelligence”.
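As a toy illustration of why a compression-style target is so easy to quantify, the sketch below scores an off-the-shelf compressor on a few made-up “domains” by compressed-size ratio; a real benchmark would use a learned predictive model and proper log-loss, but the point about measurability is the same.

```python
import os
import zlib

# Hypothetical stand-in corpora for different "domains".
domains = {
    "english": b"the cat sat on the mat " * 200,
    "dna":     b"ACGTACGGTTACGATCG" * 300,
    "noise":   os.urandom(4096),  # essentially incompressible
}

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by raw size; smaller means better prediction."""
    return len(zlib.compress(data, 9)) / len(data)

for name, corpus in domains.items():
    print(f"{name:8s} ratio = {compression_ratio(corpus):.3f}")
```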
It doesn’t “reduce” anything, not in the reductionist sense anyway. If you take that formula and apply the as-yet-unspecified, ultra-powerful mathematics package to it (that’s what you would need to run it on a planet’s worth of computers), it’s this mathematics package that has to be extremely intelligent and ridiculously superhuman before the resulting AI is even a chimp. It’s this mathematics package that has to learn tricks and read books, that has to be able to do something as simple as making use of a theorem it encountered on input.
The mathematics package doesn’t have to do anything “clever” to build a highly clever sequence predictor. It just has to be efficient in terms of computing time and training data necessary to learn correct hypotheses.
So nshepperd is quite correct: MC-AIXI is a ridiculously inefficient sequence predictor and action selector, with major visible flaws, but reducing “general intelligence” to “maximizing a utility function over world-states via sequence prediction in an active environment” is a Big Deal.
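For reference, the reduction in question is roughly Hutter’s AIXI expectimax expression: action selection is nothing but expected-reward maximization under a Solomonoff-style mixture over environment programs (here U is a universal machine, ℓ(q) the length of program q, and m the planning horizon; this follows the standard presentation rather than any particular implementation):

```latex
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
  \bigl[ r_t + \cdots + r_m \bigr]
  \sum_{q \,:\, U(q, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Every term in this expression is well defined; the whole difficulty is that the sum over programs and the nested maximizations are uncomputable as written.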
A multitude of AIs have been following what you think the “AIXI” model is (select predictors that work, use them) long before anyone bothered to formulate it as a brute-force loop (AIXI).
I think you, like most people over here, have a completely inverted view with regard to the difficulty of different breakthroughs. There is a point where the AI uses hierarchical models to deal with an environment of greater complexity than the AI itself; getting there is fundamentally difficult, as in, we have no clue how to get there.
It is nice to believe that the world of hoi polloi is waiting on you for some conceptual breakthrough just roughly within your reach, like AIXI is, but that’s just not how it works.
edit: Basically, it’s as if you’re concerned about nuclear-powered 20-foot-tall robots that shoot nuclear hand grenades. After all, the concept of a 20-foot-tall robot is the enormous breakthrough, while a sufficiently small nuclear reactor or hand-grenade-sized nukes are just a matter of “efficiency”.
That’s not what’s interesting about AIXI. “Select predictors that work, then use them” is a fair description of the entire field of machine learning; we’ve learned how to do that fairly well in narrow, well-defined problem domains, but hypothesis generation over poorly structured, arbitrarily complex environments is vastly harder.
The AIXI model is cool because it defines a clever (if totally impractical, and not without pitfalls) way of specifying a single algorithm that can generalize to arbitrary environments without requiring any pipe-fitting work on the part of its developers. That is (to my knowledge) new, and fairly impressive, though it remains a purely theoretical advance: the Monte Carlo approximation eli mentioned may qualify as general AI in some technical sense, but for practical purposes it’s about as smart as throwing transistors at a dart board.
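For a sense of what “Monte Carlo approximation” means here, the sketch below shows only the general shape (plan by sampling rollouts from a learned sequence predictor, then act greedily); the hypothetical `model.sample` interface and the random continuation policy are my own simplifications, not the actual MC-AIXI-CTW algorithm, which uses context-tree-weighting prediction plus UCT-style search.

```python
import random

def plan_by_rollouts(model, history, actions, horizon=10, samples=200):
    """Estimate each action's expected return by sampling futures from a
    learned environment model, then pick the best. `model.sample(history, a)`
    is a hypothetical interface returning one sampled (observation, reward)."""
    def rollout(first_action):
        h, total, a = list(history), 0.0, first_action
        for _ in range(horizon):
            obs, reward = model.sample(h, a)   # one sampled future step
            total += reward
            h += [a, obs]
            a = random.choice(actions)         # crude random continuation policy
        return total

    avg_return = {a: sum(rollout(a) for _ in range(samples)) / samples
                  for a in actions}
    return max(avg_return, key=avg_return.get)
```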
but hypothesis generation over poorly structured, arbitrarily complex environments is vastly harder.
Hypothesis generation over environments that aren’t massively less complex than the machine is vastly harder, and remains vastly harder (albeit there are advances). There’s a subtle problem substitution occurring which steals the thunder you originally reserved for something that actually is vastly harder.
Thing is, many people could at any time write a loop over, say, possible neural network values, and NNs (with feedback) being Turing complete, it’d work roughly the same. Said for loop would be massively, massively less complicated, ingenious, and creative than what those people actually did with their time instead.
The ridiculousness here is that, say, John worked on those ingenious algorithms while keeping in mind that the ideal is the best parameters out of the whole space (which is the abstract concept behind the for loop iteration over those parameters). You couldn’t see what John was doing because he didn’t write it out as a for loop. So James does some work where he—unlike John—has to write out the for loop explicitly, and you go Whoah!
That is (to my knowledge) new
It isn’t. See Solomonoff induction, the works of Kolmogorov, etc.
Which is why reinforcement learning is so popular, yes
There are the AIs that solve novel problems along the lines of “design a better airplane wing” or “route a microchip”, and in that field, reinforcement learning of how basic physics works is pretty much one hundred percent irrelevant.
You are completely missing the point. If we’re all going to agree that AI is possible, and agree that there’s a completely crappy but genuinely existent example of AGI right now, then it follows that getting AI up to dangerous and/or beneficial levels is a matter of additional engineering progress
Slow, long-term progress, an entire succession of technologies.
Really, you’re just like the free-energy pseudoscientists. They do all the same things. Ohh, you don’t want to give money for cold fusion? You must be a global-warming denialist. That’s the way they think, and that’s precisely the way you think about the issue. That you can literally make cold fusion happen with muons in no way, shape, or form supports what the cold-fusion crackpots are doing. Nor does it make cold fusion power plants any more or less a matter of “additional engineering progress” than they would be otherwise.
edit: by the same logic, resurrection of the long-dead never-preserved is merely a matter of “additional engineering progress”. Because you can resurrect the dead using the exact same programming construct that AIXI uses to solve problems. It’s called a “for loop”; there’s such a for loop in Monte Carlo AIXI. This loop goes over every possible [thing] when you have no clue whatsoever how to actually produce [thing]. Thing = the action sequence for AIXI, and the brain data for resurrection of the dead.
Slow, long-term progress, an entire succession of technologies.
Ok, hold on, halt, major question: how closely do you follow the field of machine learning? And computational cognitive science?
Because on the one hand, there is very significant progress being made. On the other hand, when I say “additional engineering progress”, that involves anywhere from years to decades of work before we can make an agent that can compose an essay, because we need classes of learners capable of inducing fairly precise hypotheses over large spaces of possible programs.
What it doesn’t involve is solving intractable, magical-seeming philosophical problems like the nature of “intelligence” or “consciousness” that have always held the field of AI back.
edit: by the same logic, resurrection of the long-dead never-preserved is merely a matter of “additional engineering progress”.
No, that’s just plain impossible. Even in the case of cryonic so-called “preservation”, we don’t know what we don’t know about what information we would have needed to preserve in order to restore someone.
Ok, hold on, halt, major question: how closely do you follow the field of machine learning? And computational cognitive science?
(makes the gesture with the hands) Thiiiiis closely. Seriously though, not closely enough to start claiming that mc-AIXI does something interesting when run on a server with root access, or that it would be superhuman if run on all the computers we’ve got, or the like.
No, that’s just plain impossible.
Do I need to write code for that and put it on github? It iterates over every possible brain (represented as, say, a Turing machine) and runs it for enough timesteps. It just requires too much computing power.
Tell me: if I signed up as the PhD student of one of certain major general machine-learning researchers, built out their ideas into agent models, and got one of those running on a server cluster showing interesting proto-human behaviors, might that interest you?
You are completely missing the point. If we’re all going to agree that AI is possible, and agree that there’s a completely crappy but genuinely existent example of AGI right now, then it follows that getting AI up to dangerous and/or beneficial levels is a matter of additional engineering progress
Progress in (1) the sense of incrementally throwing more resources at AIXI, or (2) forgetting AIXI and coming up with something more parsimonious?
Because, if it’s 2, there is no other AGI to use as a starting point for incremental progress.
The point being: when we can manage to make it sufficiently efficient, and provided we can make it safe, we can set it to work solving just about any problem we consider to be, well, a problem. Given sufficient power and efficiency, it becomes useful for doing stuff people want done, especially stuff people either don’t want to do themselves or have a very hard time doing themselves.
This is devoid of empirical content.
How so?
the Monte Carlo approximation eli mentioned may qualify as general AI in some technical sense, but for practical purposes it’s about as smart as throwing transistors at a dart board
What a wonderful quote!
Tell me: if I signed up as the PhD student of one of certain major general machine-learning researchers, built out their ideas into agent models, and got one of those running on a server cluster showing interesting proto-human behaviors, might that interest you?
Is that what they tell you?