I think the widespread opinion is that the human brain has relatively inefficient hardware (I don't have a cite for this) and, most likely, inefficient software as well: evolution has had only a short timeframe in which to optimize general intelligence at all, and we don't seem to be able to efficiently and consistently channel our intelligence into rational thought.
That being the case, if we were to build an AI capable of self-improvement on hardware roughly as powerful as the human brain or more so (which seems likely), it stands to reason that it could end up much faster and more effective than the human brain, and self-improvement should move it quickly in that direction.
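To make "quickly" a little more concrete, here is a minimal toy model (my own sketch, not anything from the literature) under the assumption that each self-improvement cycle yields gains proportional to current capability. Under that assumption the trajectory is exponential, which is the intuition behind the claim; the starting capability, gain rate, and step count below are arbitrary illustrative numbers.

```python
# Toy model of recursive self-improvement. Purely illustrative:
# the proportional-gain assumption and all parameters are made up.
def self_improvement_trajectory(capability: float = 1.0,
                                gain: float = 0.1,
                                steps: int = 20) -> list[float]:
    """Each cycle, the system improves itself in proportion to its
    current capability, so the gains compound."""
    trajectory = [capability]
    for _ in range(steps):
        capability += gain * capability  # improvement scales with capability
        trajectory.append(capability)
    return trajectory

if __name__ == "__main__":
    traj = self_improvement_trajectory()
    print(f"after {len(traj) - 1} cycles: {traj[-1]:.2f}x starting capability")
    # With a 10% gain per cycle, 20 cycles yields roughly 6.7x; the point
    # is the shape of the curve, not the specific numbers.
```

Of course, if each improvement instead got harder to find (gains shrinking with capability), the curve would flatten rather than explode; the model just shows what follows if gains compound.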