To recursively self-improve to superhuman intelligence, the AI should be able to do everything as well as humans, be implemented in a way that humans (and therefore the AI) can understand well enough to improve on, and have access to the details of this implementation.
It could start improving (in software) from a state where it’s much worse than humans in most areas of human capability, if it’s designed specifically for the ability to self-improve in an open-ended way.
Agreed. I meant to emphasize the importance of the AI having the ability to effectively reflect on its own implementation details. An AI that is as smart as humans but doesn’t understand how it works is not likely to FOOM.
The ability to duplicate adult researchers quickly and cheaply might accelerate the pace of research quite a bit, though.
It might indeed. 25 years of human nursing, diapering, potty training, educating, drug rehabilitating, and more educating gets you a competent human researcher about 1 time in 40, so artificial researchers are likely to be much cheaper and quicker to produce. But I sometimes wonder just how much of human innovation stems from the fact that not all human researchers have had exactly the same education.
If machine researchers are anything like phones or PCs, there will be millions of identical clones—but also substantial variation. Not just variation caused by different upbringings and histories, but variation caused by different architectural design.
By contrast, humans are mostly all the same, since we are built from much the same recipe inherited from a recent common ancestor. We aren’t built for doing research, whereas they probably will be. They will likely be running rings around us soon enough.