...we never pry open the black box and scale the brain bigger or redesign its software or even just speed up the damn thing.
How would this be done? In our current economy, humans all run similar software on similar hardware, yet we still have difficulty understanding each other; even two tradesmen of the same culture and gender who grew up in the same neighborhood hold trade knowledge the other likely cannot understand (and may not even realize they have). We’re far from being able to tinker around in each other’s heads. Even if we had the physical ability to alter others’ thought processes, it’s not clear that doing so (outside of increasing connectivity; I would love to have my brain wired up to a PC and the Internet) would produce good results. Even cognitive biases that look obviously irrational often serve purposes that are beneficial (even if only to the individual), and I don’t think we could predict the effects of eliminating them en masse.
AI could presumably be much more flexible than human minds. An AI specialized in designing new computer processors probably wouldn’t have any concept of wind, sunshine, biological reproduction, or the solar system. Who would improve upon it? Since it specializes in computer hardware, it could likely make other AIs (and itself) faster by upgrading their hardware, but could it improve their logical processes? Beyond raw speed, could it do anything for an AI that designs solar panels?
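To make that asymmetry concrete, here’s a toy sketch, purely illustrative and not a claim about how real AI systems would actually be built: the class names, domains, and multipliers are all hypothetical. The idea is that speed behaves like a transferable commodity the hardware specialist can hand to anyone, while domain logic behaves like tacit knowledge it can only improve within its own field.

```python
# Toy model of specialized AIs: a hardware specialist can speed up
# any agent, but can only refine task logic within its own domain.
# Purely illustrative; all names and numbers are made up.

from dataclasses import dataclass

@dataclass
class Agent:
    domain: str    # e.g. "processors", "solar_panels"
    speed: float   # hardware throughput: generic and transferable
    skill: float   # domain-specific logic: opaque to outsiders

class HardwareSpecialist(Agent):
    def upgrade_hardware(self, other: Agent) -> None:
        # Speed is a generic resource: the specialist can improve it
        # for any agent without understanding that agent's domain.
        other.speed *= 1.5

    def refine_logic(self, other: Agent) -> None:
        # Skill is domain-locked "Hayekian" knowledge: the specialist
        # can only refine logic in its own domain. It has no model
        # of, say, sunlight, so a solar-panel AI is out of reach.
        if other.domain == self.domain:
            other.skill *= 1.1

chip_ai = HardwareSpecialist(domain="processors", speed=1.0, skill=1.0)
solar_ai = Agent(domain="solar_panels", speed=1.0, skill=1.0)

chip_ai.upgrade_hardware(solar_ai)  # works: solar_ai.speed -> 1.5
chip_ai.refine_logic(solar_ai)      # no effect: wrong domain
chip_ai.refine_logic(chip_ai)       # works: chip_ai.skill -> 1.1
```

The only point of the sketch is the shape of the two methods: one crosses domain boundaries freely, the other hits a wall the moment the domains differ.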
In short, I see the amount of “Hayekian” knowledge in an AI society being far, far greater than in a human one, due to the flexibility of hardware and software that AI would allow over the single blueprint of the human mind. AIs would have to agree on a general set of norms in order to work together, norms most of them might not understand beyond the need to follow them. I think this could produce a society in which humans are protected.
That said, I have no idea how plausible it is that a single self-improving AGI could compete with (or conquer?) a myriad of specialized AIs. I can’t see how the AGI would make the other AIs smarter, but I can see how it might manipulate or control them.