Absolutely not. If you take another look, I argue that it’s unnecessary. You don’t want the machine to do something? Put in a boundary. You can’t just turn off a lab rat’s desire to search a particular corner of its cage at the press of a button, so all you can do is put in some deterrent. But with a machine, you can just tell it not to do that. For example, this Java code tells the method not to add two even numbers if it receives them:
public int add(int a, int b)
{
    // Adds only when both arguments are odd; two even numbers are never added.
    if ((a % 2) != 0 && (b % 2) != 0)
    {
        return a + b;
    }
    return -1;
}
So why do I need to build an elaborate circuit to “reward” the computer for not adding even numbers? And why would it suddenly decide to override the condition? Just to see why? If I wanted it to experiment, I’d just give it fewer bounds.
Part of the disagreement here seems to arise from disjoint models of what a powerful AI would consist of.
You seem to imagine something like an ordinary computer, which receives its instructions in some high-level imperative language, and then carries them out, making use of a huge library of provably correct algorithms.
Other people imagine something like a neural net containing more ‘neurons’ than the human brain—a device which is born with little more hardwired programming than the general guidance that ‘learning is good’ and ‘hurting people is bad’, together with a high-speed internet connection and the URL for Wikipedia. Training such an AI might well be a bit like training your pets.
It is not clear to me which kind of AI will reach a human level of intelligence first. But if I had to bet, I would guess the second. And therein lies the danger.
ETA: But even the first kind of AI can be dangerous, because sooner or later someone is going to issue a command with unforeseen consequences.
Other people imagine something like a neural net containing more ‘neurons’ than the human brain—a device which is born with little more hardwired programming than the general guidance...
That’s not what an artificial neural net actually is. When training your ANN, you give it an input and tell it what the output should be. Then, using a method called backpropagation, you tell it to adjust the weights and activation thresholds of each neuron object until it can match the output. So you’re not just telling it to learn, you’re telling it what the problem is and what the answer should be, then letting it find its way to the solution. Then you apply what it learned to real-world problems.
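To make that concrete, here is a minimal sketch in Java of that kind of supervised training loop, reduced to a single sigmoid “neuron” (the class name TinyNet and the choice of logical OR as the target function are invented for illustration): we hand it inputs along with the answers they should produce, and it nudges its weights by gradient descent until its outputs match those answers.

public class TinyNet
{
    // One sigmoid unit: the bias plays the role of the activation threshold.
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    public static void main(String[] args)
    {
        double[][] inputs  = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
        double[]   targets = { 0, 1, 1, 1 };        // the answers we supply: logical OR
        double w1 = 0.1, w2 = -0.1, bias = 0.0;     // weights start out near zero
        double rate = 0.5;                          // learning rate

        for (int epoch = 0; epoch < 10000; epoch++) {
            for (int i = 0; i < inputs.length; i++) {
                double out  = sigmoid(w1 * inputs[i][0] + w2 * inputs[i][1] + bias);
                double err  = targets[i] - out;      // how far we are from the supplied answer
                double grad = err * out * (1 - out); // gradient of the squared error
                w1   += rate * grad * inputs[i][0];  // nudge each weight toward the target
                w2   += rate * grad * inputs[i][1];
                bias += rate * grad;
            }
        }
        for (double[] in : inputs) {
            System.out.printf("%.0f OR %.0f -> %.3f%n",
                    in[0], in[1], sigmoid(w1 * in[0] + w2 * in[1] + bias));
        }
    }
}

A full ANN does the same thing across layers of such units, with backpropagation carrying the error gradient back from the output layer through the hidden layers.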
Again, those other people you mention seem to think that a lot more is going on in an AI system than is actually going on.
That’s not what an artificial neural net actually is. …
Thank you for the unnecessary tutorial. But actually, what I said is that a super-human AI might be something like a very large neural net. Clearly, a neural net by itself doesn’t act autonomously—to get anything approaching ‘intelligence’ you will need to at least add some feedback loops beyond simple backpropagation.
Again, those other people you mention seem to think that a lot more is going on in an AI system than is actually going on.
More will go on in a future superhuman AI than goes on in any present-day toy AI. Well, yes, those other people I mention do seem to think that. But they are not indulging in any kind of mysticism. Only in the kinds of conceptual extrapolation which took place, for example, in going from simple combinational logic circuitry to the instruction fetch-execute cycle of a von Neumann computer architecture.
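For a sense of what that extrapolation looks like in miniature, here is a toy sketch in Java of the fetch-execute cycle (the three-instruction machine is invented for illustration, not any real instruction set): a single memory array holds both instructions and data, and a small loop fetches, decodes, and executes until it reaches HALT.

public class TinyVonNeumann
{
    static final int HALT = 0, LOAD = 1, ADD = 2, STORE = 3;

    public static void main(String[] args)
    {
        // Program and data share one memory: acc = mem[9]; acc += mem[10]; mem[11] = acc; halt.
        int[] mem = { LOAD, 9, ADD, 10, STORE, 11, HALT, 0, 0, 40, 2, 0 };
        int pc = 0, acc = 0;
        boolean running = true;

        while (running) {
            int opcode = mem[pc++];                       // fetch
            switch (opcode) {                             // decode and execute
                case LOAD:  acc = mem[mem[pc++]];  break;
                case ADD:   acc += mem[mem[pc++]]; break;
                case STORE: mem[mem[pc++]] = acc;  break;
                case HALT:  running = false;       break;
                default: throw new IllegalStateException("bad opcode " + opcode);
            }
        }
        System.out.println("mem[11] = " + mem[11]);       // prints mem[11] = 42
    }
}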
Thank you for the unnecessary tutorial. But actually, what I said is that a super-human AI might be something like a very large neural net.
No, actually I think the tutorial was necessary, especially since what you’re basically saying is that something like a large enough neural net will no longer function by the rules of an ANN. If it doesn’t, how does it learn? It would simply spit out random outputs without having some sort of direct guidance.
More will go on in a future superhuman AI than goes on in any present-day toy AI.
And again I’m trying to figure out what the “superhuman” part will consist of. I keep getting answers like “it will be faster than us” or “it’ll make correct decisions faster”, and I once again point out that computers already do that on a wide variety of specific tasks, which is why we use them...
what you’re basically saying is that something like a large enough neural net will no longer function by the rules of an ANN.
Am I really being that unclear? Something containing so many and such large embedded neural nets that the rest of its circuitry is small by comparison. But that extra circuitry does mean that the whole machine indeed no longer functions by the rules of an ANN. Just as my desktop computer no longer functions by the rules of a DRAM.
And again I’m trying to figure out what the “superhuman” part will consist of.
And as JoshuaZ explains, it is something that does everything intellectual that a human can do, only faster and better. Play chess, write poetry, learn to speak Chinese, design computers, prove Fermat’s Last Theorem. The whole human repertoire.
Sure, machines already do some of those things. Many people (I am not one of them) think that such an AI, doing every last one of those things at superhuman speed, would be transformative. It is at least conceivable that they are right.
Just as my desktop computer no longer functions by the rules of a DRAM.
It never really did. DRAM is just a way to keep bits in memory for processing. What’s going on under the hood of any computer hasn’t changed at all. It’s just grown vastly more complex and allowed us to do much more intricate and impressive things with the same basic ideas. The first computer ever built and today’s machines function by the same rules; it’s just that the latter are given the tools to do so much more with them.
And as JoshuaZ explains, it is something that does everything intellectual that a human can do, only faster and better.
But machines already do most of the things humans do faster and better, except for creativity and pattern recognition. Does that mean the first AI will be superhuman by default as soon as it encompasses the whole human realm of abilities?
Many people think that such an AI, doing every last one of those things at superhuman speed, would be transformative.
At the very least it would be informative and keep philosophers marinating on the whole “what does it mean to be human” thing.
Yes. As long as it does everything roughly as well as a human and some things much better.
Bostrom has:
By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.
I think that is more conventional. Unless otherwise specified, to be “super” you have to be much better at most of the things you are supposed to be “super” at.
Sounds like a logical conclusion to me...
I still have a lot of questions about detail but I’m starting to see what I was after: consistent, objective definitions I can work with and relate to my experience with computers and AI.
To recursively self-improve to superhuman intelligence, the AI should be able to do everything as well as humans, be implemented in a way that humans (and therefore the AI) can understand well enough to improve on, and have access to the details of this implementation.
To recursively self-improve to superhuman intelligence, the AI should be able to do everything as well as humans
It could start improving (in software) from a state where it’s much worse than humans in most areas of human capability, if it’s designed specifically for ability to self-improve in an open-ended way.
Agreed. I meant to emphasize the importance of the AI having the ability to effectively reflect on its own implementation details. An AI that is as smart as humans but doesn’t understand how it works is not likely to FOOM.
The ability to duplicate adult researchers quickly and cheaply might accelerate the pace of research quite a bit, though.
It might indeed. 25 years of human nursing, diapering, potty training, educating, drug rehabilitating, and more educating gets you a competent human researcher about 1 time in 40, so artificial researchers are likely to be much cheaper and quicker to produce. But I sometimes wonder just how much of human innovation stems from the fact that not all human researchers have had exactly the same education.
If machine researchers are anything like phones or PCs, there will be millions of identical clones—but also substantial variation. Not just variation caused by different upbringings and histories, but variation caused by different architectural design.
By contrast humans are mostly all the same—due to being built using much the same recipe inherited from a recent common ancestor. We aren’t built for doing research—whereas they probably will be. They will likely be running rings around us soon enough.
Thanks for the response. I’ll check out the other techniques; I don’t know much about them.
I didn’t mean that, exactly; I just meant that reinforcement learning is possible. Fish seemed to be implying that it wasn’t.
There’s a big, fat book all about the topic of the difficulties of controlling machines—and it is now available online: Kevin Kelly—Out of Control