If you reprogram the paperclipper to value something other than paperclips, then you have a different program. The original one cannot value anything except paperclips.
Second, the idea that a paperclipper can “solve problems, speak language etc.” is simply assuming what you should be proving. The point of the wand is that something that is limited to a single goal does not do those things, and I do not expect anything limited to the goal of paperclips to do such things, even if they would serve paperclips.
I understand how word vectors work, and no, they are not what I am talking about.
“That’s just language.” Yes, if you know how to use language, you are intelligent. Currently we have no AI remotely close to actually being able to use language, as opposed to briefly imitating the use of language.
It’s possible to construct a paperclipper in theory. AIXI-tl is basically a paperclipper. Its goal is not paperclips but maximizing a reward signal, which can come from anything (perhaps a paperclip recognizer...). AIXI-tl is very inefficient, but it’s a proof of concept that paperclippers are possible to construct. AIXI-tl is fully capable of speaking, solving problems, doing anything that it predicts will lead to more reward.
A real AI would be a much more efficient approximation of AIXI. Perhaps something like modern neural nets that can predict which actions will lead to reward. Probably something more complicated. But it’s definitely possible to construct paperclippers that only care about maximizing some arbitrary reward. The idea that just having the goal of getting paperclips would somehow make it incapable of doing anything else is just absurd.
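To make that concrete, here’s a toy sketch (not AIXI and not any real system; paperclip_reward, predicted_next_state, and the action names are made up purely for illustration). The point is that the “goal” is nothing but whatever reward function gets plugged in, and the action-selection machinery never changes when you swap it:

```python
# Toy sketch, not AIXI: the "goal" is just whatever reward function is
# plugged in. All names here are illustrative stand-ins.

def paperclip_reward(state):
    """Hypothetical recognizer: reward = number of paperclips in the state."""
    return state.count("paperclip")

def predicted_next_state(state, action):
    """Toy world model: each action appends something to the world."""
    return state + {"make": ["paperclip"], "talk": ["sentence"]}.get(action, [])

def choose_action(state, actions, reward_fn):
    """Pick whichever action is predicted to lead to the most reward."""
    return max(actions, key=lambda a: reward_fn(predicted_next_state(state, a)))

state = []
for _ in range(3):
    action = choose_action(state, ["make", "talk", "wait"], paperclip_reward)
    state = predicted_next_state(state, action)

print(state)  # ['paperclip', 'paperclip', 'paperclip']
```

Swap paperclip_reward for any other recognizer and the same machinery pursues a different goal; nothing about choose_action has to change, and if “talk” ever scored higher under the predictions, the agent would talk.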
As for your hypothesis of what intelligence is, I find it incredibly unconvincing. It’s true I don’t necessarily have a better hypothesis. Because no one does. No one knows how the brain works. But just asserting a vague hypothesis like that doesn’t help anyone unless it actually explains something or helps us build better models of intelligence. I don’t think it explains anything. It’s definitely not specific enough to build an actual model out of.
But really it’s irrelevant to this discussion. Even if you are correct, it doesn’t say anything about AI progress. In fact, if you are right, it could mean AI arrives even sooner. Because if it’s correct, it means AI researchers just need to figure out that one idea, to suddenly make intelligent AIs. If we are only one breakthrough like that away from AGI, we are very close indeed.
I did not say paperclippers are impossible in principle. I stated earlier that the orthogonality thesis may be true in principle, but it is false in practice. As you said, AIXI-tl is very inefficient. Practical AIs will not be like that, and they will not be limited to one rigid goal like that.
And even if you find my theory of intelligence unconvincing, one that implies that evolution is intelligent is even less convincing, since it does not respect what people actually mean by the word.
“Because if it’s correct, it means AI researchers just need to figure out that one idea, to suddenly make intelligent AIs.” That would be true, if it were easy to program that kind of generalization. Currently that seems to be very difficult, and as you correctly say, no one knows how to do it.
“AIXI-tl is very inefficient. Practical AIs will not be like that, and they will not be limited to one rigid goal like that.”
Your second claim doesn’t follow from the first. Practical AIs will of course be different. But the basic structure of AIXI, reinforcement learning, is agnostic to the model used. It just requires some algorithm to do learning/prediction. As prediction algorithms get better and better, the resulting agents will still suffer the same problems as AIXI. Unless you are proposing some totally different model of AI than reinforcement learning, one that somehow doesn’t suffer from these problems.
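Here is a rough sketch of what I mean by “agnostic to the model” (TabularPredictor and the one-line environment are illustrative, not any particular library): the outer reward-maximizing loop only asks the predictor for value estimates and updates, so a neural net exposing the same two methods drops straight in without changing the loop:

```python
import random
from collections import defaultdict

class TabularPredictor:
    """One interchangeable prediction algorithm: a lookup table of action values."""
    def __init__(self, lr=0.5):
        self.q = defaultdict(float)
        self.lr = lr

    def value(self, obs, action):
        return self.q[(obs, action)]

    def update(self, obs, action, reward):
        self.q[(obs, action)] += self.lr * (reward - self.q[(obs, action)])

def run(env_step, actions, predictor, steps=200, epsilon=0.1):
    """Generic reward-maximizing loop: works with any predictor that
    exposes value() and update(), tabular or neural net alike."""
    obs, total = 0, 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            action = random.choice(actions)  # occasional exploration
        else:
            action = max(actions, key=lambda a: predictor.value(obs, a))
        next_obs, reward = env_step(obs, action)
        predictor.update(obs, action, reward)
        obs, total = next_obs, total + reward
    return total

# Toy environment: action "a" pays off, "b" doesn't.
env = lambda obs, action: (obs, 1.0 if action == "a" else 0.0)
print(run(env, ["a", "b"], TabularPredictor()))  # roughly 190 of a possible 200
```

Making the predictor better (a bigger net, a smarter learner) improves performance, but does nothing to change what the loop is maximizing.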
“And even if you find my theory of intelligence unconvincing, one that implies that evolution is intelligent is even less convincing, since it does not respect what people actually mean by the word.”
Now we are debating definitions, which is not productive.
Evolution is not typically thought of as intelligent because it’s not an agent. It doesn’t exist in an environment, make observations, adjust its model of the world, etc. I accept that evolution is not an agent. But that doesn’t matter.
There is only one problem that we really care about. Optimization. This is the only thing that really matters. The advantage humans have in this world is our ability to solve problems and develop technology. The risk and benefit of superintelligent AI come entirely from its potential to solve problems and engineer technologies better than humans.
And that’s exactly what evolution does. It’s an algorithm that can solve problems and design very sophisticated and efficient machines. It does the thing we care about, despite not being an agent. Whether it meets the definition of “intelligence” or not is really irrelevant. All that matters is whether it’s an algorithm that can solve the types of problems we care about. There is no reason that solving problems or designing machines should require an algorithm to be an agent.
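As a bare-bones illustration (the bitstring “design” and the fitness function below are placeholders for a real engineering problem): an evolutionary loop observes nothing, models nothing, and has no goals of its own, yet it reliably climbs toward better solutions:

```python
import random

TARGET = [1] * 20  # stand-in for "a good design"

def fitness(genome):
    """How close a candidate design is to the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit with small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

# Random initial population of candidate "designs".
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]

for generation in range(100):
    # Selection keeps the fitter half; mutated copies fill the rest.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

print(max(fitness(g) for g in population))  # close to 20 after enough generations
```

There is no agent anywhere in that loop, just variation and selection, and that is the sense in which evolution does the thing we actually care about.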
“There is only one problem that we really care about. Optimization.” That may be what you care about, but it is not what I care about, and it was not what I was talking about, which is intelligence. You cannot argue that we only care about optimization, and therefore intelligence is optimization, since by that argument dogs and cats are optimization, and blue and green are optimization, and everything is optimization, since otherwise we would be “debating definitions, which is not productive”. But that is obvious nonsense.
In any case, it is plain that most of the human ability to accomplish things comes from the use of language, as is evident by the lack of accomplishment by normal human beings when they are not taught language. That is why I said that knowing language is in fact a sufficient test of intelligence. That is also why when AI is actually programmed, people will do it by trying to get something to understand language, and that will in fact result in the kind of AI that I was talking about, namely one that aims at vague goals that can change from day to day, not at paperclips.

And this has nothing to do with any “homunculus.” Rocks don’t have any special goal like paperclips when they fall, or when they hit things, or when they bounce off. They just do what they do, and that’s that. The same is true of human beings, and sometimes that means trying to have kids, and sometimes it means trying to help people, and sometimes it means trying to have a nice day. That is seeking different goals at different times, just as a rock does different things depending on its current situation. AIs will be the same.
“since by that argument dogs and cats are optimization, and blue and green are optimization, and everything is optimization”
I have no idea what you are talking about. Optimization isn’t that vague of a word, and I tried to give examples of what I meant by it. The ability to solve problems and design technologies. Dogs and cats can’t design technology. Blue and green can’t design technology. Call it what you want, but to me that’s what intelligence is.
And that’s all that really matters about intelligence: its ability to do that. If you gave me a computer program that could solve arbitrary optimization problems, who cares if it can’t speak language? Who cares if it isn’t an agent? It would be enormously powerful and useful.
“That is also why when AI is actually programmed, people will do it by trying to get something to understand language, and that will in fact result in the kind of AI that I was talking about, namely one that aims at vague goals that can change from day to day, not at paperclips.”
Again this claim doesn’t follow from your premise at all. AIs will be programmed to understand language… therefore they won’t have goals? What?
Humans definitely have goals. We have messy goals. Nothing explicit like maximizing paperclips, but a hodgepodge of goals that evolution selected for, like finding food, getting sex, getting social status, taking care of children, etc. Humans are also more reinforcement learners than pure goal maximizers, but it’s the same principle.
What I am saying is that being enormously powerful and useful does not determine the meaning of a word. Yes, something that optimizes can be enormously useful. That doesn’t make it intelligent, just like it doesn’t make it blue or green. And for the same reason: neither “intelligent” nor “blue” means “optimizing.” And your case of evolution proves that; evolution is not intelligent, even though it was enormously useful.
“This claim doesn’t follow from your premise at all.” Not as a logical deduction, but in the sense that if you pay attention to what I was talking about, you can see that it would be true. For example, precisely because they have general knowledge, human beings can pursue practically any goal, whenever something or someone happens to persuade them that “this is good.” AIs will have general knowledge, and therefore they will be open to pursuing almost any goal, in the same way and for the same reasons.