I doubt there is much motivation here for “at least 20 years” except the very fact that it is hard to tell what will happen in 20 years.
I agree with Robin Hanson that we are maybe 5% of the way to general AI. I think 20 years from now the distance we were from AI at this point will be somewhat clearer (because we will be closer, but still very distant).

On what basis do you say that?

On the basis of thinking long and hard about it.
Some people think that intelligence should be defined as optimization power. But suppose you had a magic wand that could convert anything it touched into gold. Whenever you touch any solid object with it, it immediately turns to gold. That happens in every environment with every kind of object, and it happens no matter what impediments you try to set up to prevent it. You cannot stop it from happening.
In that case, the magic wand has a high degree of optimization power. It is extremely good at converting things it touches into gold, in all possible environments.
But it is perfectly plain that the wand is not intelligent. So that definition of intelligence is mistaken.
I would propose an alternative definition. Intelligence is the ability to engage in abstract thought. You could characterize that as pattern recognition, except that it is the ability to recognize patterns in patterns in patterns, recursively.
The most intelligent AI we have is not remotely close to that. It can only recognize very particular patterns in very particular sorts of data. Many of Eliezer’s philosophical mistakes concerning AI arise from this fact. He assumes that the AI we have is close to being intelligent, and therefore concludes that intelligent behavior is similar to the behavior of such programs. One example of that was the case of AlphaGo, where Eliezer called it “superintelligent with bugs,” rather than admitting the obvious fact that it was better than Lee Sedol, but not much better, and only at Go, and that it generally played badly when it was in bad positions.
The orthogonality thesis is a similar mistake of that kind; something that is limited to seeking a limited goal like “maximize paperclips” cannot possibly be intelligent, because it cannot recognize the abstract concept of a goal.
But in relation to your original question, the point is that the most intelligent AI we have is incredibly stupid. Unless you believe there is some magical point where there is a sudden change from stupid to intelligent, we are still extremely far off from intelligent machines. And there is no such magical point, as is evident in the behavior of children, which passes imperceptibly from stupid to intelligent.
In that case, the magic wand has a high degree of optimization power. It is extremely good at converting things it touches into gold, in all possible environments. But it is perfectly plain that the wand is not intelligent. So that definition of intelligence is mistaken.
The wand isn’t generally intelligent. Maybe by some stretch of the definition we could sorta say it’s “intelligent” at the task of turning things to gold. But it can’t do any tasks other than turning things into gold. The whole point of AGI is general intelligence. That’s what the G stands for.
Humans are generally intelligent. We can apply our brains to widely different tasks, including many that we weren’t evolved to be good at at all. From playing Go to designing rockets. Evolution is generally intelligent. It can find remarkably good designs for totally arbitrary objective functions.
I think general optimization ability is a perfectly fine definition of intelligence. It includes things like humans and evolution, and some kinds of simple but general AI, but excludes things like animals and domain-specific AI. It defines intelligence only by results: if you can optimize an arbitrary goal, you are intelligent. It doesn’t try to specify what the internal mechanisms should be, just whether or not they work. And it’s continuous: you can have anything from a very stupid optimizer like evolution all the way up to very good/intelligent ones like humans.
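To make “defined only by results” concrete, here is a minimal sketch in Python (my own illustration; the function names and parameters are made up): a toy evolutionary loop that improves candidates against whatever objective you hand it, without knowing anything about what that objective means.

```python
import random

def evolve(objective, dim=5, pop_size=20, generations=200):
    # Start from random candidate solutions.
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Rank candidates purely by the result the objective reports.
        pop.sort(key=objective, reverse=True)
        parents = pop[: pop_size // 2]
        # Replace the worst half with mutated copies of the best half.
        children = [[x + random.gauss(0, 0.1) for x in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=objective)

# An arbitrary objective the optimizer knows nothing about:
# maximize -sum(x^2), i.e. push every coordinate toward zero.
best = evolve(lambda v: -sum(x * x for x in v))
print(best)  # best candidate found; coordinates drift toward 0
```

The loop never represents the goal abstractly; it only measures results, which is the sense of “optimizer” I mean.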
Intelligence is the ability to engage in abstract thought. You could characterize that as pattern recognition, except that it is the ability to recognize patterns in patterns in patterns, recursively.
This definition is really vague. You are just shoving the hard problem of defining intelligence into the hard problem of defining “abstract thought”. I guess the second sentence kind of clarifies what you mean. But it’s not clear at all that humans even meet that definition. Do humans recognize patterns in patterns? I don’t think so. I don’t think we are consciously aware of the vast majority of our pattern recognition ability.
The most intelligent AI we have is not remotely close to that. It can only recognize very particular patterns in very particular sorts of data.
Not really. Deep neural networks are extraordinarily general. The same networks that win at Go could be applied to language translation, driving cars, playing Pac-Man, or recognizing objects in an image.
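As a rough illustration of that generality (my own sketch, assuming PyTorch; the layer sizes and task heads are arbitrary, not any particular published system), the same generic trunk can feed entirely different task-specific outputs:

```python
import torch
import torch.nn as nn

# One generic trunk that maps an encoded input to a feature vector.
trunk = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 128))

# Task-specific heads reuse the identical trunk machinery.
go_move_head = nn.Linear(128, 19 * 19)   # a score for each Go board position
image_class_head = nn.Linear(128, 1000)  # class logits for object recognition

x = torch.randn(1, 128)                  # stand-in for an encoded input
features = trunk(x)
print(go_move_head(features).shape, image_class_head(features).shape)
```

Real systems differ in their input encoders, but nothing about the learning machinery is specific to one domain.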
One example of that was the case of AlphaGo, where Eliezer called it “superintelligent with bugs,”
The exact quote is “superhuman with bugs”. In context, he was describing the fact that the AI plays far above human level, but still makes some mistakes a human might not make. And it’s not even clear when it makes mistakes, because it is so far above human players that it may see things we don’t see, which would make those moves not mistakes.
The orthogonality thesis is a similar mistake of that kind; something that is limited to seeking a limited goal like “maximize paperclips” cannot possibly be intelligent, because it cannot recognize the abstract concept of a goal.
A paperclip maximizer can recognize the concept of a goal. It’s not stupid, it just only cares about paperclips. In the same way humans are programmed by evolution to maximize sex, social status, and similarly arbitrary goals, there is no reason an AI couldn’t be programmed to maximize paperclips. Again, perhaps humans are not intelligent by your definition.
Unless you believe there is some magical point where there is a sudden change from stupid to intelligent, we are still extremely far off from intelligent machines.
Yeah, that seems quite obviously true. Just look at the chimpanzees. By some accounts the main difference in human brains is that they are just scaled-up primate brains, 3 times as large, with a bit more sophisticated language ability. And suddenly you go from creatures that can barely master simple tools and can’t communicate ideas, to creatures capable of technological civilization. 500 million years of evolution refined the mammal brain to get chimps, but only about a million was needed to go from stupid animals to generally intelligent humans.
I don’t see any reason to believe AI progress should be linear. In practice it is clearly not. Areas of AI often have sudden discontinuities or increasing rates of progress. I don’t see any reason why there can’t be a single breakthrough that causes enormous progress, or why even incremental progress must be slow. If evolution can make brains by a bunch of stupid random mutations, surely thousands of intelligent engineers can do so much better on a much shorter time scale.
as is evident in the behavior of children, which passes imperceptibly from stupid to intelligent.
This isn’t a valid analogy at all. Baby humans still have human brains running the same algorithms as adult humans. Their brains are just slightly smaller and have had less time to learn and train. Individual AIs may increase in ability linearly as they grow and learn. But the AI algorithms themselves have no such constraint, someone could theoretically figure out the perfect AI algorithm tomorrow and code it up. There is certainly no law of nature that says AI progress must be slow.
I agree that one problem with the wand is that it is not general. The same thing is true of paperclippers. Just as the wand is limited to converting things to gold, the paperclipper is limited to making paperclips.
But calling evolution intelligent is to speak in metaphors, and that indicates that your definition of intelligence is not a good one if we wish to speak strictly about it.
Humans certainly do recognize patterns in patterns. For example, we recognize that some things are red. That means recognizing a pattern: this red thing is similar to that red thing. Likewise, we recognize that some things are orange. This orange thing is similar to that orange thing. Likewise with other colors. And within those patterns we recognize other similarities, and so people talk about “warm” and “cool” colors, noticing that blue and green are similar to each other in some way, and that orange and red are similar to each other in another way. Likewise we have the concept of “color”, which is noting that all of these patterns are part of a more general pattern. And then we notice that the concepts of “color” and “sound” have an even more general similarity to each other. And so on.
The neural networks you spoke of do nothing like this. Yes, you might be able to apply them to those various tasks. But they only generate something like base-level patterns, like noticing red and orange. They do not understand patterns of patterns.
I think that saying “only about a million” years was needed for something implies a misunderstanding, at least on some level, of how long a million years is.
I agree that babies have the ability to be intelligent all along. Even when they are babies, they are still recognizing patterns in patterns. None of our AI programs do this at all.
I agree that one problem with the wand is that it is not general. The same thing is true of paperclippers. Just as the wand is limited to converting things to gold, the paperclipper is limited to making paperclips.
The paperclipper can be programmed to value any goal other than paperclips. Paperclips is just its current goal. The gold wand cannot do anything else.

But even if its desire for paperclips is immutable and hard-wired, it’s still clearly intelligent. It can solve problems, speak language, design machines, etc., so long as it serves its goal of making paperclips.
Humans certainly do recognize patterns in patterns. For example, we recognize that some things are red. That means recognizing a pattern: this red thing is similar to that red thing. Likewise, we recognize that some things are orange.
Artificial neural networks can do the same thing. This is a trivial property of NNs: similar objects produce similar internal representations, and those internal representations tend to be semantically meaningful. Look up word vectors.
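A toy illustration of that property (the vectors below are invented for the example; real word vectors such as word2vec or GloVe are learned from text and have hundreds of dimensions):

```python
import numpy as np

# Hypothetical low-dimensional "embeddings", hand-picked for illustration.
vecs = {
    "red":    np.array([0.9, 0.8, 0.1, 0.0]),
    "orange": np.array([0.8, 0.9, 0.2, 0.1]),
    "sound":  np.array([0.0, 0.1, 0.9, 0.8]),
}

def cosine(a, b):
    # Cosine similarity: near 1.0 means similar direction, near 0.0 unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vecs["red"], vecs["orange"]))  # high: similar concepts
print(cosine(vecs["red"], vecs["sound"]))   # low: dissimilar concepts
```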
And within those patterns we recognize other similarities, and so people talk about “warm” and “cool” colors, noticing that blue and green are similar to each other in some way, and that orange and red are similar to each other in another way.
That’s not a “pattern within a pattern”. That’s just a typical pattern, that green and blue appear near “cool” things and that orange and red appear near “hot” things.
Likewise we have the concept of “color”, which is noting that all of these patterns are part of a more general pattern.
That’s just language. The word “color” happens to be useful to communicate with people. I agree that language learning is important for AI. And this is a field that is making rapid progress.
If you reprogram the paperclipper to value something other than paperclips, then you have a different program. The original one cannot value anything except paperclips.
Second, the idea that a paperclipper can “solve problems, speak language etc.” is simply assuming what you should be proving. The point of the wand is that something that is limited to a single goal does not do those things, and I do not expect anything limited to the goal of paperclips to do such things, even if they would serve paperclips.
I understand how word vectors work, and no, they are not what I am talking about.
“That’s just language.” Yes, if you know how to use language, you are intelligent. Currently we have no AI remotely close to actually being able to use language, as opposed to briefly imitating the use of language.
It’s possible to construct a paperclipper in theory. AIXI-tl is basically a paperclipper. Its goal is not paperclips but maximizing a reward signal, which can come from anything (perhaps a paperclip recognizer...) AIXI-tl is very inefficient, but it’s a proof of concept that paperclippers are possible to construct. AIXI-tl is fully capable of speaking, solving problems, anything that it predicts will lead to more reward.

A real AI would be a much more efficient approximation of AIXI. Perhaps something like modern neural nets that can predict what actions will lead to reward. Probably something more complicated. But it’s definitely possible to construct paperclippers that only care about maximizing some arbitrary reward. The idea that just having the goal of getting paperclips would somehow make it incapable of doing anything else is just absurd.
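As a sketch of what I mean by the agent machinery being agnostic to what the reward measures (everything here is a toy I made up, nothing like a real AIXI approximation): a tiny value-learning loop whose reward function happens to be a stand-in “paperclip counter”.

```python
import random

def paperclip_reward(action):
    # Hypothetical environment: action 2 happens to yield the most paperclips.
    return {0: 0.1, 1: 0.5, 2: 0.9}[action] + random.gauss(0, 0.05)

q = {a: 0.0 for a in (0, 1, 2)}        # estimated value of each action
for step in range(1000):
    if random.random() < 0.1:          # occasional exploration
        a = random.choice(list(q))
    else:                              # otherwise exploit current estimates
        a = max(q, key=q.get)
    r = paperclip_reward(a)
    q[a] += 0.1 * (r - q[a])           # incremental value update

print(q)  # the agent favors whatever the reward source scores highest
```

Swap in any other reward function and the learning code does not change at all; that is the whole point.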
As for your hypothesis of what intelligence is, I find it incredibly unconvincing. It’s true I don’t necessarily have a better hypothesis. Because no one does. No one knows how the brain works. But just asserting a vague hypothesis like that doesn’t help anyone unless it actually explains something or helps us build better models of intelligence. I don’t think it explains anything. It’s definitely not specific enough to build an actual model out of.

But really it’s irrelevant to this discussion. Even if you are correct, it doesn’t say anything about AI progress. In fact if you are right, it could mean AI comes even sooner. Because if it’s correct, it means AI researchers just need to figure out that one idea, to suddenly make intelligent AIs. If we are only one breakthrough like that away from AGI, we are very close indeed.
I did not say paperclippers are impossible in principle. I stated earlier that the orthogonality thesis may be true in principle, but it is false in practice. As you said, AIXI-tl is very inefficient. Practical AIs will not be like that, and they will not be limited to one rigid goal like that.
And even if you find my theory of intelligence unconvincing, one that implies that evolution is intelligent is even less convincing, since it does not respect what people actually mean by the word.
“Because if it’s correct, it means AI researchers just need to figure out that one idea, to suddenly make intelligent AIs.” That would be true, if it were easy to program that kind of generalization. Currently that seems to be very difficult, and as you correctly say, no one knows how to do it.
AIXI-tl is very inefficient. Practical AIs will not be like that, and they will not be limited to one rigid goal like that.
Your second claim doesn’t follow from the first. Practical AIs will of course be different. But the basic structure of AIXI, reinforcement learning, is agnostic to the model used. It just requires some algorithm to do learning/prediction. As prediction algorithms get better and better, they will still suffer the same problems as AIXI. Unless you are proposing some totally different model of AI than reinforcement learning, that somehow doesn’t suffer from these problems.
And even if you find my theory of intelligence unconvincing, one that implies that evolution is intelligent is even less convincing, since it does not respect what people actually mean by the word.
Now we are debating definitions, which is not productive.
Evolution is not typically thought of as intelligent because it’s not an agent. It doesn’t exist in an environment, make observations, adjust its model of the world, and so on. I accept that evolution is not an agent. But that doesn’t matter.
There is only one problem that we really care about. Optimization. This is the only thing that really matters. The advantage humans have in this world is our ability to solve problems and develop technology. The risk and benefit of superintelligent AI comes entirely from its potential to solve problems and engineer technologies better than humans.
And that’s exactly what evolution does. It’s an algorithm that can solve problems and design very sophisticated and efficient machines. It does the thing we care about, despite not being an agent. Whether it meets the definition of “intelligence” or not, is really irrelevant. All that matters is if it’s an algorithm that can solve the types of problems we care about. There is no reason that solving problems or designing machines should require an algorithm to be an agent.
“There is only one problem that we really care about. Optimization.” That may be what you care about, but it is not what I care about, and it was not what I was talking about, which is intelligence. You cannot argue that we only care about optimization, and therefore intelligence is optimization, since by that argument dogs and cats are optimization, and blue and green are optimization, and everything is optimization, since otherwise we would be “debating definitions, which is not productive”. But that is obvious nonsense.
In any case, it is plain that most of the human ability to accomplish things comes from the use of language, as is evident from the lack of accomplishment by normal human beings when they are not taught language. That is why I said that knowing language is in fact a sufficient test of intelligence. That is also why when AI is actually programmed, people will do it by trying to get something to understand language, and that will in fact result in the kind of AI that I was talking about, namely one that aims at vague goals that can change from day to day, not at paperclips. And this has nothing to do with any “homunculus.” Rocks don’t have any special goal like paperclips when they fall, or when they hit things, or when they bounce off. They just do what they do, and that’s that. The same is true of human beings, and sometimes that means trying to have kids, and sometimes it means trying to help people, and sometimes it means trying to have a nice day. That is seeking different goals at different times, just as a rock does different things depending on its current situation. AIs will be the same.
since by that argument dogs and cats are optimization, and blue and green are optimization, and everything is optimization
I have no idea what you are talking about. Optimization isn’t that vague of a word, and I tried to give examples of what I meant by it. The ability to solve problems and design technologies. Dogs and cats can’t design technology. Blue and green can’t design technology. Call it what you want, but to me that’s what intelligence is.
And that’s all that really matters about intelligence: its ability to do that. If you gave me a computer program that could solve arbitrary optimization problems, who cares if it can’t speak language? Who cares if it isn’t an agent? It would be enormously powerful and useful.
That is also why when AI is actually programmed, people will do it by trying to get something to understand language, and that will in fact result in the kind of AI that I was talking about, namely one that aims at vague goals that can change from day to day, not at paperclips.
Again this claim doesn’t follow from your premise at all. AIs will be programmed to understand language… therefore they won’t have goals? What?
Humans definitely have goals. We have messy goals. Nothing explicit like maximizing paperclips, but a hodgepodge of goals that evolution selected for, like finding food, getting sex, getting social status, taking care of children, etc. Humans are also more reinforcement learners than pure goal maximizers, but it’s the same principle.
What I am saying is that being enormously powerful and useful does not determine the meaning of a word. Yes, something that optimizes can be enormously useful. That doesn’t make it intelligent, just like it doesn’t make it blue or green. And for the same reason: neither “intelligent” nor “blue” means “optimizing.” And your case of evolution proves that; evolution is not intelligent, even though it was enormously useful.
“This claim doesn’t follow from your premise at all.” Not as a logical deduction, but in the sense that if you pay attention to what I was talking about, you can see that it would be true. For example, precisely because they have general knowledge, human beings can pursue practically any goal, whenever something or someone happens to persuade them that “this is good.” AIs will have general knowledge, and therefore they will be open to pursuing almost any goal, in the same way and for the same reasons.
Your example of a magic wand doesn’t sound correct to me. By what basis is a Midas touch “optimizing”? It is powerful, yes, but why “optimizing”? A supernova that vaporizes entire planets is powerful, but not optimizing. Seems like a strawman.
Defining intelligence as pattern recognition is not new. Ben Goertzel has espoused this view for some twenty years, and written a book on the subject, I believe. I’m not sure I buy the strong connection with “recognizing the abstract concept of a goal” and such, however. There are plenty of conceivable architectures in which this meta-level thinking cannot happen, yet which are nevertheless capable of producing arbitrarily complex intelligent behavior.
Regarding your last point, your terminology is unnecessarily obscure. There doesn’t have to be a “magic point”—it could be simply a matter of correct software, but insufficient data or processing power. A human baby is a very stupid device, incapable of doing anything intelligent. But with experiential data and processing time it becomes a very powerful general intelligence over the course of 25 years, without any designer intervention. You bring up this very point yourself, which seems to counteract your claim.
Also, the wand is optimizing. The reason is that it doesn’t just do some consistent chemical process that works in some circumstances: it works no matter what particular circumstances it is in. It is just the same as the fact that a paperclipper produces paperclips no matter what circumstance it starts out in.
A supernova on the other hand does not optimize, because it produces different results in different situations.
“There are plenty of conceivable architectures in which this meta-level thinking cannot happen, yet which are nevertheless capable of producing arbitrarily complex intelligent behavior.”
Maybe, but that’s exactly like the orthogonality thesis. The fact that something is possible in principle doesn’t mean there’s any easy way to do it in practice. The easy way to produce arbitrarily complex intelligent behavior in practice is to produce something that can abstract to an arbitrary degree of generality, and that means recognizing abstractions like “goal”, “good,” and so on.
The reason why a human baby becomes intelligent over time is that right from the beginning it has the ability to generalize to pretty much any degree necessary. So I don’t see how that argues against my position. I would expect AIs also to require a process of “growing up” although you might be able to speed that process up so that it takes months rather than years. That is still another reason why the orthogonality thesis is false in practice. AIs that grow up among human beings will grow up with relatively humanlike values (although not exactly human), and the fact that arbitrary values are possible in principle will not make them actual.
The fact that something is possible in principle doesn’t mean there’s any easy way to do it in practice. The easy way to produce arbitrarily complex intelligent behavior in practice is to produce something that can abstract to an arbitrary degree of generality, and that means recognizing abstractions like “goal”, “good,” and so on.
I actually had specific examples in mind, basically all GOFAI approaches to general AI. But in any case this logic doesn’t seem to hold up. You could argue that something needs to HAVE goals in order to be intelligent—I don’t think so, at least not with the technical definition typically given to ‘goals’, but I will grant it for the purpose of discussion. It still doesn’t follow that the thing has to be aware of these goals, or introspective of them. One can have goals without being aware that one has them, or able to represent those goals explicitly. Most human beings fall in this category most of the time, it is sad to say.
I am saying the opposite. Having a goal, in Eliezer’s sense, is contrary to being intelligent. That is, doing everything you do for the sake of one thing and only one thing, and not being capable of doing anything else, is the behavior of an idiotic fanatic, not of an intelligent being.
I said that to be intelligent you need to understand the concept of a goal. That does not mean having one; in fact it means the ability to have many different goals, because your general understanding enables you to see that there is nothing forcing you to pursue one particular goal fanatically.
Smells like a homunculus. What guides your reasoning about your goals?

Do you mean how do you decide which goal to choose? Many different causes. For example if someone tells you that something is good, you might do it, just because you trust them and they told you it was good. They don’t even have to say what goal it will accomplish, other than the fact that it will be something good.
Note that when you do that, you are not trying to accomplish any particular goal, other than “something good,” which is completely general, and could be paperclips, for all you know, if the person who told you that was a paperclipper, and might be something entirely different.