This seems to be quite similar to Robin Hanson’s Ubertool argument.
More generally, humans within organizations self-modify using communication and training.
The bottlenecks that have been pointed out to me so far are the ones related to wetware as a computing platform. But since AGI, as far as I can tell, can’t directly change its hardware through recursive self-modification, I don’t see how that bottleneck puts AGI at an immediate, FOOMy advantage.
The problem with wetware is not that it’s hard to change the hardware; it’s that very little of it seems to be implemented in modifiable software. We can’t change the algorithm our eyes use to assemble images (it might be useful to avoid autocorrecting typos, for instance). We can’t save the stack when an interrupt comes in. We can’t easily process more slowly in exchange for more working memory.
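By contrast, that kind of trade-off is routine in software. As a minimal sketch (the expensive `analyze` step is hypothetical, just a stand-in for real work), a program can choose at runtime between using more working memory to go faster and using less memory but recomputing:

```python
from functools import lru_cache

def analyze(x: int) -> int:
    # Stand-in for some expensive computation.
    return sum(i * i for i in range(x))

# Trade working memory for speed: cache up to 10,000 results.
fast_analyze = lru_cache(maxsize=10_000)(analyze)

# Trade speed for memory: just recompute every time.
slow_analyze = analyze

# The same program can pick either mode on the fly; a brain cannot.
result = fast_analyze(1_000)
```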
We also have limits on how much we can self-monitor. Consider writing PHP code which manually generates SQL statements. It would be nice if we could remember to always escape our inputs to avoid SQL injection attacks, and a computer program could self-modify to do so. A human could try, but it is inevitable that they would occasionally forget (see WordPress’s history of security holes).
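To illustrate (a sketch in Python rather than PHP, using the standard sqlite3 module): a program can make “always escape inputs” a structural guarantee rather than an act of memory, by routing every query through parameterized statements, so there is nothing left to forget:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

def add_user(name: str, email: str) -> None:
    # Parameterized query: the driver handles escaping, so a malicious
    # input like "x'); DROP TABLE users; --" is stored as plain data.
    conn.execute("INSERT INTO users (name, email) VALUES (?, ?)", (name, email))

add_user("Alice", "alice@example.com")
add_user("x'); DROP TABLE users; --", "attacker@example.com")

print(conn.execute("SELECT name FROM users").fetchall())
```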
We can’t trivially copy our skills: if you need two humans who understand a codebase, it takes roughly twice as long as training one. If you want some help on a project, you end up spending a ton of time explaining the problem to the next person. You can’t just transfer your state over.
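A program, on the other hand, can duplicate whatever state it has built up at essentially zero marginal cost. A toy sketch (the ProjectContext class is hypothetical, standing in for hard-won understanding of a codebase):

```python
import copy
import pickle

class ProjectContext:
    """Stand-in for the state a worker builds up about a codebase."""
    def __init__(self) -> None:
        self.notes: dict[str, str] = {}

    def learn(self, topic: str, insight: str) -> None:
        self.notes[topic] = insight

# One "worker" spends the effort to understand the project once...
original = ProjectContext()
original.learn("build system", "run make bootstrap before anything else")
original.learn("auth module", "tokens are cached in redis, not the DB")

# ...and that understanding is copied to a second worker instantly,
# with no re-explaining. Humans can't do this.
clone = copy.deepcopy(original)
assert clone.notes == original.notes

# The state can also be serialized and shipped to another machine.
blob = pickle.dumps(original)
```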
None of these things are “software”, in the sense of being modifiable. And they’re all things that would let self-improvement happen more quickly, and that a computer could change.
I should also mention that an AI with an FPGA could change its hardware. But I think this is a minor point; the flexibility of software is simply vastly higher than the flexibility of brains.