Bostrom has:

> By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.
I think that is more conventional. Unless otherwise specified, to be “super” you have to be much better at most of the things you are supposed to be “super” at.

Sounds like a logical conclusion to me...

I still have a lot of questions about detail, but I’m starting to see what I was after: consistent, objective definitions I can work with and relate to my experience with computers and AI.
To recursively self-improve to superhuman intelligence, the AI should be able to do everything as well as humans, be implemented in a way that humans (and therefore the AI) can understand well enough to improve on, and have access to the details of this implementation.
> To recursively self-improve to superhuman intelligence, the AI should be able to do everything as well as humans
It could start improving (in software) from a state where it’s much worse than humans in most areas of human capability, if it’s designed specifically for ability to self-improve in an open-ended way.
Agreed. I meant to emphasize the importance of the AI having the ability to effectively reflect on its own implementation details. An AI that is as smart as humans but doesn’t understand how it works is not likely to FOOM.

The ability to duplicate adult researchers quickly and cheaply might accelerate the pace of research quite a bit, though.

It might indeed. 25 years of human nursing, diapering, potty training, educating, drug rehabilitating, and more educating gets you a competent human researcher about 1 time in 40, so artificial researchers are likely to be much cheaper and quicker to produce. But I sometimes wonder just how much of human innovation stems from the fact that not all human researchers have had exactly the same education.
If machine researchers are anything like phones or PCs, there will be millions of identical clones—but also substantial variation. Not just variation caused by different upbringings and histories, but variation caused by different architectural design.
By contrast, humans are mostly all the same—due to being built using much the same recipe inherited from a recent common ancestor. We aren’t built for doing research—whereas they probably will be. They will likely be running rings around us soon enough.
Yes. As long as it does everything roughly as well as a human and some things much better.