The information content of a mind cannot exceed the amount of information necessary to specify a representation of that same mind.
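One possible formalization of this claim, offered only as a sketch: read "information content" as Kolmogorov complexity and assume the representation R is one from which the mind M can be reconstructed by a fixed procedure. Then the claim is roughly

\[ K(M) \;\le\; K(R) + c \;\le\; |R| + c' \]

where K(·) is Kolmogorov complexity, |R| is the length of the representation in bits, and c, c' are constants that do not depend on M.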
If your argument is based on information capacity alone, it can be knocked down pretty easily. An AI can understand some small part of its design and improve that, then pick another part and improve that, etc. For example, if the AI is a computer program, it has a sure-fire way of improving itself without completely understanding its own design: build faster processors. Alternatively, you could imagine a population of a million identical AIs working together on the problem of improving their common design. After all, humans can build aircraft carriers that are too complex to be understood by any single human. Actually, I think today’s humanity is pretty close to understanding the human mind well enough to improve it.
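A toy sketch of that piecewise strategy, with everything here (the component dict, evaluate, improve_component) being hypothetical illustrations rather than anyone's actual design: the system never needs a global model of itself, only a local tweak and a benchmark.

```python
import random

# Hypothetical toy model: the "AI" is just a dict of component parameters,
# and "improving" a component means keeping a random tweak whenever the
# benchmark score goes up. Names and numbers are illustrative assumptions.

def evaluate(design):
    # Stand-in benchmark: higher is better.
    return -sum((v - 3.0) ** 2 for v in design.values())

def improve_component(design, name, trials=50):
    best = dict(design)
    for _ in range(trials):
        candidate = dict(best)
        candidate[name] += random.gauss(0, 0.5)   # local change to one part only
        if evaluate(candidate) > evaluate(best):  # keep it only if it helps
            best = candidate
    return best

design = {"memory": 1.0, "planner": 0.0, "perception": 5.0}
for _ in range(10):                    # pick another part and improve that, etc.
    part = random.choice(list(design))
    design = improve_component(design, part)

print(design, evaluate(design))
```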
I don’t think the number of AIs actually matters. If multiple AIs can do a job, then a single AI should be able to simulate them as though it were multiple AIs (or, better yet, just figure out how to do the job on its own) and then do it just as well. Another thing to note is that if the AI makes a copy of its program and puts it in external storage, that doesn’t add any extra complexity to itself. It can then run its optimization process on the copy, although I do agree that it would be more practical if it only improved parts of itself at a time.
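A minimal sketch of the copy-and-optimize point, under the assumption that the "AI" is an ordinary program file (optimize_copy is a hypothetical placeholder, not a real self-improvement routine): the running program only reads itself and writes elsewhere, so its own code and state stay exactly as they were.

```python
import shutil
import sys
from pathlib import Path

# Hedged toy illustration: the program copies its own source to "external
# storage" (here, just another directory) and modifies only the copy. The
# running instance is never touched, so no complexity is added to it.

def optimize_copy(path: Path) -> None:
    # Placeholder "optimization": a real system would rewrite the copy here.
    source = path.read_text()
    path.write_text("# candidate revision, produced offline\n" + source)

def main() -> None:
    me = Path(sys.argv[0]).resolve()          # the program's own source file
    workspace = me.parent / "external_store"  # stand-in for external storage
    workspace.mkdir(exist_ok=True)

    candidate = workspace / me.name
    shutil.copy(me, candidate)                # copy, don't modify, the original
    optimize_copy(candidate)                  # work only on the copy
    print(f"original untouched: {me}\ncandidate written to: {candidate}")

if __name__ == "__main__":
    main()
```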
You’re right, I used the million AIs as an intuition pump, imitating Eliezer’s That Alien Message.
It depends upon what designing a mind is like: how much minds intrinsically rely on interactions between parts, and how far those interactions reach.
In the brain, most of the interesting stuff, science and the like, is done by culturally created components. The evidence for this is the stark variety of worldviews that exist today and have existed throughout history (on mostly the same genes), and the different ways those views lead the people who hold them to interact with the world.
Making a powerful AI, in this view, is not just a problem of making a system with lots of hardware or the right algorithms from birth; it is a problem of making a system with the right ideas. And ideas interact heavily in the brain. They can squash or encourage each other. If one idea goes, others that rely on it might go as well.
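As a toy illustration of that last point (the idea graph and the remove_idea helper are made-up examples, not a model of any real cognitive architecture): if ideas are nodes that rely on other ideas, deleting one can take its dependents down with it.

```python
# Toy model, purely illustrative: each idea lists the ideas it relies on.
ideas = {
    "germ theory": [],
    "sterilize instruments": ["germ theory"],
    "hospital hygiene rules": ["sterilize instruments"],
    "astrology": [],
}

def remove_idea(ideas, name):
    """Drop an idea, then drop anything whose support is now gone."""
    ideas.pop(name, None)
    while True:
        orphans = [k for k, deps in ideas.items()
                   if any(d not in ideas for d in deps)]
        if not orphans:
            return ideas
        for k in orphans:
            del ideas[k]

print(remove_idea(dict(ideas), "germ theory"))
# -> only "astrology" survives; the ideas built on germ theory collapse too.
```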
I suspect that we might be close to making the human mind able to store more ideas or to process ideas more quickly. How much that will lead to the creation of better ideas, I don’t know. That is, will we get a feedback loop? We might just get better at storing gossip and social information.