Other people imagine something like a neural net containing more ‘neurons’ than the human brain—a device which is born with little more hardwired programming than the general guidance...
That’s not what an artificial neural net actually is. When training your ANN, you give it an input and tell it what the output should be. Then, using a method called backpropagation, you have it adjust the weights and activation thresholds of each neuron object until it can match that output. So you’re not just telling it to learn: you’re telling it what the problem is and what the answer should be, then letting it find its way to the solution. Only then do you apply what it has learned to real-world problems.
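For the record, that loop is easy to show concretely. Here is a minimal sketch of supervised training with backpropagation (a tiny made-up net learning XOR; the sizes, learning rate, and step count are arbitrary choices for illustration, not anyone's production code):

```python
import numpy as np

# Training data: inputs plus the outputs we have decided they should map to (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights (sizes are arbitrary)
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: the net's current guess at the answers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): push the error back through the
    # layers and nudge every weight downhill on it.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))   # typically converges toward [[0], [1], [1], [0]]
```

Note that the "teacher" here is the `y` array and the error signal derived from it; the net never decides for itself what problem it is solving.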
Again, those other people you mention seem to think that a lot more is going on in an AI system than is actually going on.
That’s not what an artificial neural net actually is. …
Thank you for the unnecessary tutorial. But actually, what I said is that a super-human AI might be something like a very large neural net. Clearly, a neural net by itself doesn’t act autonomously—to get anything approaching ‘intelligence’ you will need to at least add some feedback loops beyond simple backpropagation.
Again, those other people you mention seem to think that a lot more is going on in an AI system than is actually going on.
More will go on in a future superhuman AI than goes on in any present-day toy AI. Well, yes, those other people I mention do seem to think that. But they are not indulging in any kind of mysticism. Only in the kinds of conceptual extrapolation which took place, for example, in going from simple combinational logic circuitry to the instruction fetch-execute cycle of a von Neumann computer architecture.
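To show what that particular extrapolation amounts to in miniature, here is a toy sketch of a fetch-execute cycle: one flat memory holding both program and data, animated by a simple loop. The instruction set is invented for this example and resembles no real ISA:

```python
# A toy von Neumann machine. The "extra" idea beyond combinational
# logic is just this loop: fetch, decode, execute, repeat.
def run(memory):
    pc = 0      # program counter
    acc = 0     # accumulator
    while True:
        op, arg = memory[pc]        # fetch
        pc += 1
        if op == "HALT":            # decode + execute
            return memory
        elif op == "LOAD":
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "JNZ":           # jump if accumulator is nonzero
            if acc != 0:
                pc = arg

# Adds the numbers in cells 4 and 5, leaving the sum in cell 6.
mem = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 2, 3, 0]
print(run(mem)[6])   # -> 5
```

Nothing mystical was added in that historical step; the claim is that going from today's ANNs to a future AI involves conceptual moves of the same flavor.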
Thank you for the unnecessary tutorial. But actually, what I said is that a super-human AI might be something like a very large neural net.
No, actually I think the tutorial was necessary, especially since what you’re basically saying is that something like a large enough neural net would no longer function by the rules of an ANN. If it doesn’t, how does it learn? It would simply spit out random outputs without some sort of direct guidance.
More will go on in a future superhuman AI than goes on in any present-day toy AI.
And again I’m trying to figure out what the “superhuman” part will consist of. I keep getting answers like “it will be faster than us” or “it’ll make correct decisions faster”, and I once again point out that computers already do that on a wide variety of specific tasks, which is why we use them...
what you’re basically saying is that something like a large enough neural net will no longer function by the rules of an ANN.
Am I really being that unclear? I mean something containing so many, and such large, embedded neural nets that the rest of its circuitry is small by comparison. But that extra circuitry does mean that the whole machine indeed no longer functions by the rules of an ANN. Just as my desktop computer no longer functions by the rules of a DRAM.
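As a cartoon of that division of labor (everything here is hypothetical: `policy_net`, `DummyEnv`, and the loop around them are invented names), the embedded net does the learning while a small amount of ordinary circuitry turns it into an agent:

```python
import numpy as np

rng = np.random.default_rng(1)

class DummyEnv:
    """Stand-in environment, invented for this sketch."""
    def reset(self):
        return rng.normal(size=8)
    def step(self, action):
        reward = 1.0 if action == 0 else 0.0
        return rng.normal(size=8), reward

W = rng.normal(size=(8, 3))            # stands in for a huge trained net

def policy_net(observation):
    """The embedded ANN: observation in, action scores out."""
    return np.tanh(observation @ W)

def agent_loop(env, steps=100):
    """The non-ANN circuitry: a small feedback loop of sensing and acting.
    This outer machinery is why the whole device no longer runs purely
    by the rules of an ANN, even though the net dwarfs it in size."""
    obs = env.reset()
    total = 0.0
    for _ in range(steps):
        action = int(np.argmax(policy_net(obs)))   # consult the net
        obs, reward = env.step(action)             # act, observe feedback
        total += reward                            # bookkeeping the net lacks
    return total

print(agent_loop(DummyEnv()))
```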
And again I’m trying to figure out what the “superhuman” part will consist of.
And as JoshuaZ explains, it is something that does everything intellectual that a human can do, only faster and better. Play chess, write poetry, learn to speak Chinese, design computers, prove Fermat’s Last Theorem. The whole human repertoire.
Sure, machines already do some of those things. Many people (I am not one of them) think that such an AI, doing every last one of those things at superhuman speed, would be transformative. It is at least conceivable that they are right.
Just as my desktop computer no longer functions by the rules of a DRAM.
It never really did. DRAM is just a way to keep bits in memory for processing. What’s going on under the hood of any computer hasn’t changed at all; it has just grown vastly more complex, which lets us do much more intricate and impressive things with the same basic ideas. The first computer ever built and today’s machines function by the same rules; the latter have simply been given the tools to do far more with them.
And as JoshuaZ explains, it is something that does everything intellectual that a human can do, only faster and better.
But machines already do most of the things humans do faster and better, except for creativity and pattern recognition. Does that mean the first AI will be superhuman by default as soon as it encompasses the whole human range of abilities?
Many people think that such an AI, doing every last one of those things at superhuman speed, would be transformative.
At the very least it would be informative and keep philosophers marinating on the whole “what does it mean to be human” thing.

Yes. As long as it does everything roughly as well as a human and some things much better.

Bostrom has:
By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.
I think that is more conventional. Unless otherwise specified, to be “super” you have to be much better at most of the things you are supposed to be “super” at.

Sounds like a logical conclusion to me...
I still have a lot of questions about detail but I’m starting to see what I was after: consistent, objective definitions I can work with and relate to my experience with computers and AI.
To recursively self-improve to superhuman intelligence, the AI should be able to do everything as well as humans, be implemented in a way that humans (and therefore the AI) can understand well enough to improve on, and have access to the details of this implementation.
To recursively self-improve to superhuman intelligence, the AI should be able to do everything as well as humans
It could start improving (in software) from a state where it’s much worse than humans in most areas of human capability, if it’s designed specifically for the ability to self-improve in an open-ended way.
Agreed. I meant to emphasize the importance of the AI having the ability to effectively reflect on its own implementation details. An AI that is as smart as humans but doesn’t understand how it works is not likely to FOOM.
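As a toy illustration of why that access matters (a made-up example, not anyone's proposed design): a program that can read, vary, and score its own implementation can hill-climb toward better versions of itself. Take away the access to `params`, or to the benchmark, and the loop stalls, which is the point above:

```python
import random

def benchmark(params):
    """Scores an implementation; lower is better. The stand-in task
    here is 'approximate pi', purely for illustration."""
    return abs(params[0] - 3.14159265)

def self_improve(params, rounds=1000):
    """Hill-climbing over the program's own 'implementation details'.
    Works only because the program can read and rewrite `params`
    and can measure the result."""
    score = benchmark(params)
    for _ in range(rounds):
        candidate = [p + random.gauss(0, 0.1) for p in params]  # vary itself
        new_score = benchmark(candidate)
        if new_score < score:              # keep strict improvements only
            params, score = candidate, new_score
    return params, score

print(self_improve([0.0]))   # drifts toward pi over the rounds
```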
The ability to duplicate adult researchers quickly and cheaply might accelerate the pace of research quite a bit, though.

It might indeed. 25 years of human nursing, diapering, potty training, educating, drug rehabilitating, and more educating gets you a competent human researcher about 1 time in 40, so artificial researchers are likely to be much cheaper and quicker to produce. But I sometimes wonder just how much of human innovation stems from the fact that not all human researchers have had exactly the same education.
If machine researchers are anything like phones or PCs, there will be millions of identical clones—but also substantial variation. Not just variation caused by different upbringings and histories, but variation caused by different architectural design.
By contrast humans are mostly all the same—due to being built using much the same recipe inherited from a recent common ancestor. We aren’t built for doing research—whereas they probably will be. They will likely be running rings around us soon enough.